The most fundamental API for creating GUIs in most operating systems capable of displaying them is defined in the C language. A great example of how C can be used to create GUIs is presented in Charles Petzold’s book, Programming Windows. I read the 5th edition of that book, the 10th anniversary edition, nearly 10 years ago. The original version, published in 1988, dates to about 3 years after the official release of Microsoft Windows. After reading that book nearly a decade ago, I understood how granular, detailed, and intense an exercise it can be to build a GUI from some of the most core building blocks available.
At the time, I was merely curious and wanted to know more about the underlying code used by more generalized UI coding packages. I surmised that more insight into how the foundation operates would better inform decisions when using the composite packages that streamline the use of these elements. Eventually I shifted my focus from Microsoft environments to Linux. Along the way, I became increasingly aware of the importance of cross-platform code. All the mainstream operating systems of the time had a core API, in some form of C, in which you could piece together a user interface with more or less sophistication.
While C was a common language in which you could define a user interface at a fundamental level, that did not mean the same code you used to define a user interface on Microsoft Windows would operate unchanged on Apple’s operating system or on Linux. The C language was largely the same, but the actual constructs each operating system maker provided for piecing together a GUI were different. That meant that in order to write a GUI program that worked the same on all mainstream GUI operating systems, you had to write the program from scratch, each time, for each operating system.
Web applications got around this by having the makers of web browsers define the engine that translates HTML into native graphics on whatever operating system the browser runs on. This was great for the broad areas of information distribution and capture addressed by always-connected web applications, but they remained limited in that they did not have full access to the operating system the way a native application does. That meant a fully featured, fully capable, uninhibited GUI application meant to run on all GUI operating systems still had to be specially defined for each one.
That was true, except that for quite some time there existed universal GUI creation packages, generally provided by commercial organizations. Companies by the names of Borland, Elemental Software, and Trolltech, among others, provided general GUI frameworks in which their code package handled all the necessary translations for each operating system they supported. Borland eventually sold to Embarcadero, which continues to offer general GUI toolkits that work across platforms. Trolltech sold to Nokia, which later sold the most famed and renowned GUI toolkit of all, Qt, to another company.
Some of the earliest GUI toolkits that were both cross-platform and available to a wider audience of GUI creators emerged from the Linux/UNIX arena and one commercial organization. That commercial organization was Sun Microsystems, with its Abstract Windowing Toolkit (AWT) defined in Java. AWT would eventually give way to Swing and SWT. Microsoft tried to latch onto that with J++ before Sun Microsystems prevailed against such actions in the US judicial system, which led Microsoft to create C# and the .NET Framework.
A Linux developer by the name of Miguel de Icaza would lead the effort to produce a version of C# that could create GUI applications on Linux, basing many parts of that Linux-based .NET atop a more fundamental Linux GUI framework named GTK+, which underpins the GNOME desktop environment to which he was also a significant contributor. Miguel would eventually create Xamarin, which Microsoft later purchased for the creation of cross-platform mobile apps. As of this writing, Microsoft has established a fully supported cross-platform version of .NET that includes no cross-platform GUI capabilities, but alternatives exist, again from the Linux world. Chief among them is GTK+, which has been ported to other operating systems but which hardly anyone uses there, for reasons of aesthetics and functional integration. Still, the point is that cross-platform GUI toolkits exist, though deriving a highly sophisticated GUI evenly across platforms remains extremely elusive for many specific reasons that would take hours to describe.
Regardless of the software world’s politics around general GUI frameworks like Qt, Swing, web vs. native, and so on, the common thread across all these frameworks is that they consolidate and pre-define geometry. Yes, going back to Charles Petzold and Programming Windows, you can write a GUI with nothing more than definitions of where lines begin and end. Code can then evaluate whether a mouse click’s x/y coordinates coincide with a drawn square or rectangle to determine if an action tied to that position on the screen should occur. GUI toolkits remove much of this effort, decreasing the time it takes to put buttons, text entry fields, and shapes with text atop them onto a screen.
Oddly enough, you can remove the intricacies of drawing shapes on a screen, detecting interaction upon them, and modifying them, but you cannot eliminate the need to apply geometry in general. A given GUI toolkit may define a button in such a way that, instead of writing 100–200 lines of code to draw a button and wire mouse or touch interactions to it, you can achieve the same with 2 or 3 lines of code, all those details now hidden behind the easier abstraction. What does not go away is the decision of where (either precisely or generally) on the screen to situate the button. The button or text entry field has to go somewhere, and where it goes has to make sense.
A forms generator would seem to preclude much of this. Yet even when a forms generator creates visual elements and determines their placement on the screen, those elements remain subject to a data model. The data model you feed into the generator influences the geometry, as the two are related to each other. You have to define a data model such that it can be represented acceptably in a generated layout. Too many data elements and you lose the space needed to lay out and show buttons; too few and you may lack meaningful visual mechanisms to enact useful action.
At the point a forms generator or GUI abstraction no longer adequately represents the data model, layout, or functions you intend, you inevitably descend the ladder of general GUI abstractions such as buttons, text entry fields, shapes, and lines. Sometimes you have to fill in the gaps in a general abstraction when certain elements do not exist or are insufficiently represented or implemented. When existing GUI generalizations are not sufficient for the level of visualization or nuanced interactivity sought, you return to a level of intricate composition of discrete visual elements and their signal triggers. In this way, across all levels of abstraction of GUI implementation, geometry, spatial orientation, and planar forms are ever-present, primarily because the physical display screen on which they reside is itself a planar area principally of width and height. At some point, then, you may encounter the C language, or something close to it in effect, in the native environment, along with the most intricate concerns of geometric representation.