Future of Life Institute Concerns About AI (Artificial Intelligence)

Over 1,000 people signed a letter through the Future of Life Institute to, as an article from CNET implies, "protect mankind from machines." The signatories include leading scientists, engineers, and thinkers. The research document attached to the letter cites 92 sources across wide-ranging questions, research, and analysis. I learned a few things from the articles and the research document accompanying the open letter.

The Threat

The first thing I realized is that the consensus isn't that AI is dangerous, but that it could be dangerous if approached the wrong way. I agree with that. The origin of any threat from AI will be in how people construct it, with biological motivation as the prime mover.

The research paper alludes to this but does not emphasize it. Sections 2.1 and 2.2 discuss economics and ethics in an abstract way that recognizes their bearing on the issue but may be too abstract to be meaningful. The economic benefits of AI may outweigh concerns about job displacement. Biological motivation as prime mover means, more than anything else, that the succession of newer, more powerful AI can be seen as a consequence of business decisions.

With or without AI, the technology industry is poised to dissolve traditional employment. A few people will displace the great many long before fully autonomous machines even have the chance. Indeed, a future in which powerful AI exists in an environment where traditional employment no longer exists is far more likely to unfold: the forces of industry, harnessed by a few organizations, will push people out long before machines get that chance.

The Science

A careful reading of section 2.3 reveals a few realities. Present-day computer science and technology practice is not mature enough to guarantee perfectly stable computer programs, let alone perfectly behaved AI. Section 2.3 opens with four questions around verification, validity, security, and control. The very existence of these questions shows that scholars, researchers, and practitioners do not yet have adequate methods for creating programs that can be completely verified, validated, secured, and controlled.

True Impact

No one truly knows whether AI will pose a threat to people. The research document, on one level, reveals great uncertainty among those who create the science behind technology. The document is proof that while technology works, it is not complete and may never be.

What that means is that if this document is taken seriously, one thing you can guarantee is more money: more money for research, more money for institutions, more money for the expansion of knowledge into these matters and the matters connected to them, ad infinitum. That could be a good thing, a catalyst for shifting the flow of investment in this area.

If the resulting research is real, then science will improve. Computer science will improve in pursuit of these questions. The sciences that intersect with computer science and benefit from its products will improve. Technology, as a by-product, will improve. That is awesome.
