The White House was hacked twice in the last 12 months, with the most recent breach reported on 4/7/2015. As we all know, security problems happen. Three things are going to happen at a technology level that will eliminate security issues in the future, and none of them has to do with encryption. I now view encryption as a means to slow down the disclosure of information, not to prevent it. The three things that will address security are Whitelisting, Reductionism, and Machine Learning.
Exploits work best through blind spots known as undefined behavior, which is a huge breeding ground for security vulnerabilities. Dealing with it begins with having the right concept, and the concept I want to write about I call Whitelist Computing. The idea of a whitelist is well established in computer operations, and I often see it used in individual programs; Adblock Plus is one example. If you understand what Adblock Plus does, that is the concept I am talking about. The idea is to apply it at a broader level: operating systems, programs, and capabilities would all be defined according to a whitelist model. Proper implementation of a whitelisting methodology at each tier of the computing hierarchy would greatly reduce undefined behavior.
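The core of the whitelist model is default-deny: anything not explicitly approved is rejected. A minimal sketch of that rule applied to program execution (the paths and function names here are hypothetical; real systems use mechanisms such as AppLocker or SELinux policies):

```python
# Default-deny execution control: only explicitly whitelisted
# programs may run; everything else is rejected by default.
ALLOWED_PROGRAMS = {
    "/usr/bin/python3",
    "/usr/bin/ssh",
}

def may_execute(path: str) -> bool:
    """Return True only if the program is on the whitelist."""
    return path in ALLOWED_PROGRAMS

print(may_execute("/usr/bin/ssh"))      # a known, approved program
print(may_execute("/tmp/dropper.bin"))  # unknown binary, denied by default
```

The important design choice is that the list enumerates what is allowed rather than what is forbidden, so novel malware is denied without anyone having to anticipate it.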
I did not know that Microsoft could pare down something as traditionally substantial as Windows Server by roughly 90% to produce Nano Server, and the company indicates this will reduce the security attack surface by a corresponding amount. The point is that software that continually evolves in a simpler direction actually becomes more secure. You can see the same trend in cloud infrastructure built on CoreOS and Google's Kubernetes, which processes more data in more sophisticated ways on top of a deliberately simplified operating system. Streamlined programming languages, operating systems, and platforms will be a major way to gain higher immunity to exploits by design.
The concepts of Whitelist Computing and Technical Reductionism will make systems easier for machine learning solutions to reason about. A simplified footprint with clear rules of execution means that properly calibrated artificial intelligence will be able to proactively reject code and data that do not belong while confidently ignoring everything that is fully within the scope of the whitelists. Rather than being called Internet Security, this layer of artificial intelligence will simply be part of the design of the system. That will create the kind of confidence necessary to use technology according to values of privacy and appropriate disclosure.
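One way to read the division of labor described above: the whitelist handles the known-good fast path, and a learned baseline scores only what falls outside it. A toy sketch, assuming a frequency baseline stands in for a real trained model (the call names and scoring rule are illustrative, not a production intrusion detector):

```python
from collections import Counter

# Calls fully within scope of the whitelist are confidently ignored.
WHITELISTED_CALLS = {"open", "read", "write", "close"}

def score_trace(trace: list[str], baseline: Counter) -> float:
    """Score only non-whitelisted calls against a learned baseline.

    Calls that were rare (or never seen) in the baseline contribute
    close to 1.0 each; common calls contribute close to 0.0.
    """
    total = sum(baseline.values()) or 1
    score = 0.0
    for call in trace:
        if call in WHITELISTED_CALLS:
            continue  # whitelisted: skip, no ML needed
        score += 1.0 - baseline[call] / total
    return score

# Hypothetical baseline learned from benign traffic.
baseline = Counter({"mmap": 50, "socket": 30, "fork": 20})

print(score_trace(["open", "read", "mmap"], baseline))  # low: familiar
print(score_trace(["ptrace", "execve"], baseline))      # high: never seen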
A Different Computing Experience for Technologists
The structures and conditions described here are acceptable for running cloud servers, for example, but the outcome would be entirely foreign to how all of us are accustomed to using personal computers and devices today. The secure technology of tomorrow would be far less malleable than the systems we design today. The default system profile should probably sit somewhere in the middle, with easily accessible mechanisms to ratchet up security for those who want their systems to be as safe as possible. At the same time, others will need the ability to turn the dial in the other direction, toward a system that functions close to the way systems do today in terms of flexibility. That would serve the interests of technological creation, exploration, and diagnostics necessary to move technology forward.
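The dial described above could be as simple as a small set of named profiles with a middle default. A minimal sketch, assuming hypothetical profile names and policy flags:

```python
# Hypothetical security profiles: "standard" is the middle default,
# "locked" ratchets security up, "builder" restores today's flexibility
# for creation, exploration, and diagnostics.
PROFILES = {
    "locked":   {"allow_unsigned_code": False, "allow_new_programs": False},
    "standard": {"allow_unsigned_code": False, "allow_new_programs": True},
    "builder":  {"allow_unsigned_code": True,  "allow_new_programs": True},
}

def effective_policy(profile: str) -> dict:
    # Unknown or unset profiles fall back to the middle setting.
    return PROFILES.get(profile, PROFILES["standard"])

print(effective_policy("locked"))
print(effective_policy("not-configured"))  # falls back to "standard"
```

The point of the sketch is the default: most users never touch the dial, so the out-of-the-box profile carries most of the security benefit.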
The problem today is that we only have one setting, and it is the engineer-centric setting. Progress in security will mean the ability to create a more deterministic design that rejects what does not belong in a typical usage profile. The rise of competent machine learning systems presents an opportunity to add logic that strengthens systems which are more manageable by virtue of their increasing simplicity.