A digital computer user interface goes by several terms or acronyms: GUI, UI, UX, HUI, forms, screens, CRUD screens, user interface, interactive screen, page, rich client, thick client, thin client, smart client, rich interactive client, and that most lauded phrase – user experience. Despite the variation in terminology, all of them have applied geometry in common: a sequence of planar forms that makes information more accessible to people using computing devices such as laptops, hand-held mobile computers, and wearable computers such as smart watches.
Going back to the terminology, those terms seem somewhat imprecise. Is not text on a black screen also visual? You have to graphically draw the text to render words. Can you not see them too? Do you not interact with what you see, even if it is text manipulated through some external mechanism such as a keyboard? It turns out that the term text-based interface is quite precise for that type of interaction, while the terms graphical user interface and even user interface inadequately describe a computer interface mechanism composed of geometric figures, largely squares and rectangles (sometimes curves and a variety of other angular forms), that serve as reception points for input and expression surfaces for information. I digress, as I lack the strong urge to more precisely define what is commonly called a UI. Instead, I will work with the term as generally expressed and understood.
The core nature of a user interface is applied geometry used to express information and identify intended action. For example, a button on a screen does not actually do anything. It is just a rectangle with words and/or images on it, the words and/or images existing within the boundary of the four lines that define the rectangle or square. The button serves as a symbol of action and is not the action itself. Software code examines input signals, concludes that a mouse, finger, or spoken word corresponds to a position within the rectangle, square, circle, or other planar form denoting a button, and determines that action should follow.
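The idea that a button is only a rectangle plus a position test can be sketched in a few lines of code. This is a minimal, illustrative sketch (the names `Button`, `contains`, and `handle_click` are my own, not any particular toolkit's API): the "button" holds nothing but geometry and a label, and separate code decides whether an input coordinate falls inside that geometry.

```python
# A sketch of hit-testing: the button is only a rectangle; code decides
# whether an input position lands inside it and that action should follow.
from dataclasses import dataclass

@dataclass
class Button:
    x: int        # left edge of the rectangle
    y: int        # top edge of the rectangle
    width: int
    height: int
    label: str

    def contains(self, px: int, py: int) -> bool:
        """True if the point (px, py) lies within this rectangle."""
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

def handle_click(buttons, px, py):
    """Return the label of the button the input landed on, if any."""
    for button in buttons:
        if button.contains(px, py):
            return button.label   # the symbol; real code would invoke an action
    return None

ok = Button(x=10, y=10, width=80, height=24, label="OK")
cancel = Button(x=100, y=10, width=80, height=24, label="Cancel")

print(handle_click([ok, cancel], 15, 20))   # lands inside OK
print(handle_click([ok, cancel], 5, 5))     # lands outside both
```

Nothing in the rectangle itself acts; the geometry merely marks where meaning and action have been assigned, which is the point made above.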
The more thoroughly a person tunes their mind to a text-based interface, the more information they can harvest, translate, and process in less time compared to the equivalent UI. The same applies to the functions of both types of interfaces. The downside of a text-based interface is the time required to attain the higher levels of competence and fluency in its operation. In exchange for their limitations, UIs tend to be more accessible, and their range of use is easier to understand fully.
While a text-based interface (and its future equivalent in voice or neural input) is limited by the mind that applies it, a UI is further limited by the very geometry that defines it. In this way, a terminal or command-line screen is like being in outer space, working across dimensions, able to speak combinations of words to redefine reality, while a UI is more like working with wood or steel on earth, with their ensuing limits in form and application.
Yet UIs are more widespread (among people, though non-visual programs outnumber all other program types) because they connect better to our tangible sense of reality. Few things are as concrete between people and machines as a touch, a tap, or a click followed by feedback from the machine. It is small wonder, then, that so much effort goes into getting a UI just right, as it is one of the few tangible things about a computer to which people can apply full subjective evaluation. The proper ordering of a UI is tied to the human motivation to apply the potential of the machine to the purpose of enhancing reality, or their experience of it.
Human perception holds that forms can transform. At the right points a circle can be evolved into various spheroids and then into other planar forms. Although most UIs are composed of planar forms built from perfectly parallel and perpendicular lines connected at right angles, they can obviously be defined with curves and a variety of arcs. Now that we have come full circle, so to speak, know that the visual forms are scaffolds upon which the mind imbues concepts, thoughts, definitions, statements, meaning, and action. In the text-based arena, the wielder of the words becomes obligated to a recurring, intense application of those things, commingled with the right sequence of words and keystrokes, to enact transformation. User interfaces, on the other hand, provide a recurring map of thoughts, definitions, meaning, and actions encoded into the forms themselves. Because the physical mind develops in a physical reality that requires object recognition and the assignment of accurate meaning (i.e., you had better know what the red glow of a stove top means), that same mind can cohere more quickly to similar expressions in a UI, with less energy required.
Concordant with the encoded thoughts, definitions, statements, meaning, and action, their combination in a user interface may be understood at lesser or greater levels of obviousness. The planar and spheroid forms that make up a UI can change. That shift in form, whether brief or long in duration, provides the mind a means to transition naturally from one collection of concepts to another, possibly related, set of concepts. Again, all of this occurs within a hierarchy of planar and/or spheroid areas. The geometric qualities of the UI make a tremendous difference in how concepts are known and applied, and in how well information is received and used.
The core of a UI is first and foremost geometry. Understanding the relationships among shapes is the starting point for forming a mechanism that presents information and enacts action in a machine of any size or tangible type. So is understanding those relationships and their arrangement within the larger coordinate plane known as the screen. No matter how complex the visuals of a UI appear to be, the core of that UI is fundamentally derived from the union of lines that produces a hierarchy of planar forms and various types of spheroids.
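The hierarchy of planar forms within the screen's coordinate plane can be made concrete with a small sketch. This is an assumption-laden illustration (the `Box` type and its fields are invented for this example, not taken from any real toolkit): each form is a rectangle positioned relative to its parent, and the screen is the root plane from which every absolute position is derived.

```python
# A sketch of a UI as a hierarchy of rectangles within the screen's
# coordinate plane: each box's position is relative to its parent,
# and absolute positions fall out of walking the hierarchy.
from dataclasses import dataclass, field

@dataclass
class Box:
    name: str
    x: int        # position relative to the parent box
    y: int
    width: int
    height: int
    children: list = field(default_factory=list)

    def walk(self, parent_origin=(0, 0), depth=0):
        """Yield (depth, name, absolute origin) for this box and its descendants."""
        origin = (parent_origin[0] + self.x, parent_origin[1] + self.y)
        yield depth, self.name, origin
        for child in self.children:
            yield from child.walk(origin, depth + 1)

screen = Box("screen", 0, 0, 1920, 1080, children=[
    Box("window", 100, 100, 800, 600, children=[
        Box("toolbar", 0, 0, 800, 40),
        Box("content", 0, 40, 800, 560, children=[
            Box("button", 20, 20, 120, 32),
        ]),
    ]),
])

for depth, name, origin in screen.walk():
    print("  " * depth + f"{name} at {origin}")
```

However elaborate the rendered visuals, resolving where anything sits on screen reduces to exactly this kind of traversal of nested planar forms.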