A simple process exists for keeping a complete software development environment in a single file. The purpose of this article is to save you the weeks it took me to put it all together into a streamlined system. I will state the obvious at times, but in the end, this article will show you a way to sustain a sand boxed operational environment.
The basic items discussed are HyperVisor setup, the clean Host OS concept, and cli access to the Guest OS. This article represents a point of view: the Host OS setup should be disposable, and the Guest OS is the way to perpetuate an operational environment.
Default operating system. When your computer boots up, you have a default operating system you log into. You typically check email, search for and access websites, write documents, and keep track of information in this environment. It is your default environment. When discussing virtual machines, the default operating system is called “the” Host OS.
Sand boxed operating system. Another operating system you run inside the default operating system. After you log into your default operating system, you can launch a sand boxed operating system. It is just like a default operating system with the exception that all the files that make up this sand boxed operating system are kept inside a master file. When the sand boxed operating system is shut down, all the settings and changes are saved to this master file. It is called “a” Guest OS.
There is only one Host OS, and it can run one or more Guest OS instances. Remember that a Host OS is your regular environment. It should be set up as simply as possible. A Guest OS is your specialized environment, set up for unique activities. I call a Guest OS a sand boxed environment since that more accurately describes the intent.
More About the Sand boxed Operational Environment
A single file contains an entire system. A piece of software called a HyperVisor loads this file and produces an operating environment from the contents within the file. You log into the Guest OS as you would the Host OS.
A sand boxed operating system may be used with minimal accommodation. All of your programs, files, tools, and processes are kept inside a single file. A clean barrier exists between the sand boxed environment and the outer environment that runs it. Any complexities or specialized configurations in the sand boxed environment stay in that environment. Nothing leaks over from the default environment to the sand boxed environment, and vice versa, other than what you allow. When you shut down the sand boxed environment, no trace of that environment remains other than a single file.
The Most Basic Benefits of a Sand boxed Operational Environment
When your default environment fails, you can get back up and running faster. You can afford to lose the default environment without a moment of worry. You are able to experiment more in the sand boxed environment without damaging the stability of the default environment. Changing your default environment becomes a much easier decision to pursue.
Recovery and Work Resumption Benefits of Sand boxed Operational Environments
One thing is needed to realize the basic benefits of flexibility in the choice of default operating environment. It is something we have to do anyway: maintain current backups of data. With enough hard drive storage, you can maintain several versions of the files relating to sand boxed operating systems.
The best case scenario is that the default environment fails and you recover the most current, intact version of the Guest OS files off the physical hard drive, losing no time reconfiguring specialized environments. The worst case scenario is that you fall back on a recent backup of the Guest OS files; even then, the gap between where you were and where you end up is short enough that you avoid the time it would have taken to build a system from scratch.
Problems with Workstation Level Virtual Machines
The way I describe sand boxed operating environments sounds good, but there are a few problems if you don’t go all the way. First, if you try to use the technology in a default environment to run a “graphical” operating system, it can be a disappointing and frustrating exercise. The emphasis is on frustration.
Problem #1 is that the display resolution of the Guest OS may be too constrained to be usable for more than the simplest uses. Sometimes the problem is solvable, but not in an “out-of-the-box” way. We want something that is repeatable with as few steps as possible.
Problem #2 is that the “viewing tools” that come with virtual machine technology can be a rather constrained way to interact with the sand boxed environment. This becomes an issue when you want to Alt-Tab between your web browser, document editor, and other programs in your default environment and information in the virtual machine. We need an approach that reduces productivity barriers.
The main solution is to control and access virtual machines from a cli-based tool.
Setting up virtual machines is straightforward. Even if you did not read the many articles on the Web that describe how to do it, you could easily stumble into it. Backing up virtual machines is obvious once you know where the files are stored.
What may not be clear is an A to Z process to accomplish this. See the section below, The Approach, for more details.
A concession at this point is to assume that the virtual machine technology, the HyperVisor, will take care of the basic infrastructure details. The HyperVisor installation process takes care of basic networking, storage area allocation, and process control. What remains is cli access, backups, and Host OS configuration.
The main tools for cli interaction are SSH and the Serial Console. Each Guest OS must be set up for the Serial Console; it is not on by default. Each Guest OS must also be set up for SSH; it is not on by default either. Your primary tool will be SSH, but the Serial Console is a reliable fallback.
Backup requires a single large hard drive of at least 128GB to store two to four versions of a single Guest OS file. A 500GB hard drive means you can keep more versions to roll back to, particularly when a Guest OS holds large personal databases. The hard drive should be fast, and each Guest OS image should be less than 16GB. A reliable, affordable, and replaceable solid state drive in a USB 3.0 enclosure can be a good option for speed and capacity.
Of course, you need a Live CD or Live USB to reinstall the default operating system. A good investment is to have several live USB drives to reduce barriers to recovery. A $2 USB drive will suffice; a high-speed drive means you reinstall much faster. Higher speed plus higher capacity means you are much closer to an effortless process. Then it becomes a matter of process.
The approach that follows will work with other systems, but it may not function as well when the system requires license activation. You can load Microsoft Windows in a virtual machine, but unlike Linux and Unix based systems, it does not adapt as well to virtual machine configuration changes. Say the .img file for the virtual machine is intact but the configuration file that describes the virtual computer is lost; I can reload the .img file under a new configuration and have the entire system ready without a hiccup. Doing the same with a .img file containing Microsoft Windows would likely trigger license activation. Therefore, the process below is most beneficial to Guest OS instances based on some form of Linux or Unix.
The approach stated here is slanted towards Linux and Unix based operating systems: Ubuntu primarily, along with Apple Mac OS X, PC-BSD, FreeBSD, and Red Hat Enterprise Linux 7. It relies on good SSH support between the primary and sand boxed operating systems. The guidance could likely be replicated for other systems, but there is certainly good documentation and experience with the systems emphasized here.
Goal #6 and Goal #7 below are very important in terms of productivity. Goal #9 discusses SSH configuration. It goes without saying that keeping the Guest OS virtual image files backed up is crucial to the overall method. Quality, high-speed, high-capacity hard drives will make short work of going through the process from A to Z.
Goal #1 – Sustain Default Configuration Mode.
The primary operating system, the one the computer boots and you log into, is set up in as simple a fashion as possible. Additionally installed programs are kept to a minimum. The primary operating system features web browsing, document editing, photo editing, and virtual machine loading. If the system crashes or declines significantly, the reinstalled system, out-of-the-box, will be very close to the way the system was before the reinstall.
Goal #2 – Sustain Default Configuration Backups.
Next, backups of web browser settings, downloaded files, documents, and miscellaneous files are routinely kept up to date on external storage. That storage could be cloud-based or directly attached hard drives. Whatever the destination, it must be a place less susceptible to failure than your primary system.
Goal #3 – Isolate Involved Configurations.
Set up virtual machines that contain the different kinds of tools, extensive configurations, and processes that go far outside the scope of the primary operating system. Maintain as few virtual machines as possible to benefit backup and recovery. Assign as much memory and processing power as possible to the virtual machine while leaving a reasonable minimum for the primary operating system.
Goal #4 – Backup Virtual Machines.
When a virtual machine’s general configuration changes, such as new programs, program updates, or general changes to the operating system defined by the virtual machine, back up the virtual machine at that time. Continuous backups are not necessary, and this will reduce backup intensity. This can be further aided by keeping the data files you edit through the programs in the virtual machine on a separate backup path. Like primary operating systems, sand boxed systems can be corrupted, and you will have fewer problems if you achieve proper segmentation of data.
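As a sketch, with KVM/Qemu a backup can be as simple as copying the image file and saving the domain definition while the guest is shut down. The paths and the guest name dev-vm below are assumptions; adjust them to your own setup.

```shell
# Copy the guest's disk image to versioned backup storage (guest must be shut down).
# The source path is the libvirt default; "dev-vm" is a hypothetical guest name.
sudo rsync -av /var/lib/libvirt/images/dev-vm.img /mnt/backup/dev-vm-$(date +%Y%m%d).img

# Also save the virtual machine's configuration so it can be redefined later.
virsh dumpxml dev-vm > /mnt/backup/dev-vm-$(date +%Y%m%d).xml
```

Keeping the .xml alongside the .img means a lost configuration can be restored with virsh define instead of rebuilding it by hand.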
Goal #5 – Update Virtual Machines on First Setup.
The first time you set up a virtual machine, apply all updates first. Get the virtual machine into as pristine a shape as you can before you start using it. Test it across several reboots before you commit to further use.
Goal #6 – Install SSH Server and Services.
Install an SSH Server on each Guest OS. The specific implementation called OpenSSH Server may be a good choice for Linux-based and Unix-based systems. Make sure the service is set up to start automatically when the Guest OS boots. After you set up the SSH Server, immediately log into it from within the Guest OS: run the command “ssh localhost” and log in with the account you’ll normally use. Reboot the server. Do not do anything further yet. See the following reference articles.
- Ubuntu SSH/OpenSSH/Configuring
- Ubuntu OpenSSH Server
- Start Services on Boot in Red Hat 7 or CentOS 7
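On Ubuntu, for example, the steps described above might look like the following. The package and service names are the Ubuntu/OpenSSH defaults; other distributions may differ (on Red Hat style systems the service is sshd).

```shell
# Install the OpenSSH server inside the Guest OS.
sudo apt-get update && sudo apt-get install -y openssh-server

# Make sure the service starts automatically on boot, and start it now.
sudo systemctl enable --now ssh

# Immediately verify by logging in locally with your normal account.
ssh localhost
```

If "ssh localhost" succeeds, reboot the guest and confirm the service came back up on its own before moving on.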
Goal #7 – Configure Serial Console.
You may not use the Serial Console as often as SSH, but you still want quick cli-based access into the virtual machine in the event the SSH service is not responsive. Serial Console is extremely reliable and good to have as a fallback in a pinch. You get it for free with just a few configuration file updates. See the following article as an example for Ubuntu.
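As a sketch for a systemd-based guest such as Ubuntu, the configuration file updates might look like this. The device name ttyS0 is the usual first serial port, but verify it for your setup.

```shell
# Enable a login prompt on the first serial port of the Guest OS.
sudo systemctl enable --now serial-getty@ttyS0.service

# Optionally route boot and kernel output to the serial port as well:
# edit /etc/default/grub so GRUB_CMDLINE_LINUX contains "console=ttyS0,115200n8",
# then regenerate the GRUB configuration and reboot the guest.
sudo update-grub
```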
Goal #8 – Connect to Serial Console.
When using KVM/Qemu you want to be familiar with the basic method to connect to a serial console. You do it through the virsh command. More extensive coverage is available elsewhere on the Web. The basic command is as follows:
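Assuming a guest that virsh lists as dev-vm (a hypothetical name):

```shell
# List defined guests to confirm the name.
virsh list --all

# Attach to the guest's serial console; detach later with Ctrl+].
virsh console dev-vm
```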
Press Enter several times until you see a login prompt. If you don’t and the console appears stuck, it is likely due to an improper configuration. You will have to return to a VNC or Spice viewer session to tweak the configuration files and retry after a reboot of the guest operating system.
Once you have repeatable Serial Console access to the guest operating system, you can move to the next step. Be sure the Serial Console is ready before you proceed. Other ways may exist to provide console access to virtual machines, but this is currently the most supported way.
Goal #9 – Review SSH Configuration.
First, know that there is “likely” nothing in your primary operating system that blocks outgoing SSH connections by default. I have verified this by reinstalling a primary operating system, installing KVM/Qemu as my HyperVisor, loading the virtual image files and proceeding to SSH connect right away.
The challenge is going to be in operating system configurations in which the SSH Server does not start by default.
**SSH Server Default Startup**
Your first step under this goal is to restart SSH. I know that sounds counter-intuitive, but I have seen situations where restarting SSH worked better than invoking a start control. Your situation may be different. Regardless, once you have confirmed that the SSH Server (called the SSH daemon, or sshd) is running, try to connect from the primary operating system. If it works, continue to the next step.
Next, make sure the SSH Server Service is set to automatically start when the Guest OS boots up. That auto start is the key to a smooth, continuous workflow chain from your desktop to the full environment in the Guest OS in a few keystrokes.
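On a systemd-based Guest OS, the restart, status check, and auto-start steps might look like this. The service is named ssh on Ubuntu and sshd on Red Hat style systems.

```shell
# Restart the SSH daemon and confirm it is running.
sudo systemctl restart ssh
systemctl status ssh --no-pager

# Ensure it starts automatically on every boot.
sudo systemctl enable ssh
```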
**SSH Firewall Rule in Ubuntu with UFW**
Another issue arises when you enable ufw in Ubuntu: ufw will block ssh by default. After reading the ufw man page (a copy is on the Ubuntu website), I decided the simplest solution was to run the following command: sudo ufw allow ssh. Simple enough, and it worked to allow SSH connections from the primary operating system.
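The full sequence, as a sketch:

```shell
# Allow SSH before turning the firewall on, then verify the rule took effect.
sudo ufw allow ssh
sudo ufw enable
sudo ufw status verbose
```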
Spending time verifying SSH will pay off. Review the cited articles under Goal #6 for additional background. Under KVM/Qemu, a virtual bridge will be the primary interface through which SSH traffic routes. You will need to know which ip address to use for the local virtual machines.
**SSH Using an ip address to a Virtual Machine**
When you are ready to connect to a virtual machine, you will need its ip address. Say you have a virtual machine named XYZ. That virtual machine will have a unique ip address. A quick way to find the most likely ip address is to use the arp command: arp -an. Running this command will list the ip addresses and the interface associated with each address. Since we know that virbr0, or some variation of that interface name, is the virtual bridge supporting the virtual machine infrastructure, we can reason that addresses listed with that interface are the ones we can use to connect via ssh. Richard Jones goes into more detail about this.
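A small sketch that filters the arp output down to just the addresses on the virtual bridge. The interface name virbr0 is the libvirt default; yours may vary.

```shell
# List ip addresses that arp associates with the virtual bridge virbr0.
# arp -an lines look like: ? (192.168.122.45) at 52:54:00:ab:cd:ef [ether] on virbr0
arp -an | awk '/ on virbr0/ { gsub(/[()]/, "", $2); print $2 }'
```

Each line printed is a candidate address to try with ssh.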
Ultimately, your goal is to do the following in terms of connecting via ssh. Issue the ssh command with a username and ip address.
Format: ssh your_username_in_the_virtual_machine@some_ip_address_associated_with_the_machine
Example: ssh myuser@192.168.122.45 (a hypothetical username and address)
Goal #10 – Productivity.
You now have direct cli access to the Guest OS, and you likely have the ability to transfer files back and forth using SCP. At this goal level, you can access most of the benefits that sand boxed operating environments have to offer. As a dual benefit, you can do many activities in a Terminal window (the cli environment of choice) and use SCP to sync files as you see fit. You can even make edits on the primary system and post them to the virtual environment using SCP to test them out in the sand boxed environment.
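For example, pushing an edit into a guest and pulling a result back might look like this. The username, address, file names, and paths are all placeholders.

```shell
# Push a locally edited file into the sand boxed environment.
scp main.py myuser@192.168.122.45:~/project/

# Pull a result file back to the primary operating system.
scp myuser@192.168.122.45:~/project/output.log .
```

Since SCP rides on the SSH configuration from Goal #9, no additional setup is needed once SSH connections work.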
Meeting the standards of Goal #10 means growing in proficiency with a cli environment and editing tools such as vi, Emacs, or Nano. With vi, for example, you can do a lot knowing only how to enter edit mode (pressing the letter i), change text, go back to command mode (pressing the Esc key), save a file (typing :wq), or exit without changing anything (typing :q!). Just those four major steps can get you far before you need to understand more capable ways to maneuver in a file. That takes time, but it is a process that scales across systems. I first used vi in 2005 and have found it quite useful over the years.
Another potential benefit of SCP is in the use of tools like gedit in a graphical context. Gedit has plugins whereby you can make edits on a local system and then trigger an upload to a remote system (like, say, a sand boxed environment on the same hardware). That is one reason why SSH configuration is vital, given how closely SCP is tied to SSH. You can use NFS shares, but I reason that SSH has greater consistency in terms of applicable use (login, download/upload, inquiry, security, integration with secure services).
With KVM/Qemu, you can also attach an external USB drive. This is often faster, more durable, and more reliable than the 9p protocol from IBM. Using a USB drive to hold groups of files and connecting it to a Guest OS is quite intuitive. On the other hand, 9p has inconsistent support and may require additional configuration of the Linux kernel. As such, 9p is less consistent and dependable. USB works across all recent KVM/Qemu setups that I can reason about.
Last is the Guest OS itself. If you are doing pure text file work and do not need a graphical environment at all, your virtual machines will be more reliable, smaller, and more streamlined if they are server-based versions. In other situations, you may benefit from the occasional use of an IDE or a graphical debugger of some sort. Desktop and mobile apps will benefit from such graphical environments, and so will full stack web development. In both cases, the work may split such that most of the time you are editing in a text-focused fashion, in which case you can skip the extra steps of loading the visual tools and go straight cli. When you need to iterate visually, you have that option and do not need to pollute the primary operating system.
Much of what has been said here is couched in the context of software development environments isolated away from the primary operating system. While that is an obvious scenario for this approach, it can apply to other contexts as well. It is also a model for partitioning work of other kinds, including reducing security exposure and sectioning off web browsing and document editing. What it is not recommended for is photo editing, graphics design, or 3D editing; in those cases, the primary operating system remains the best avenue.