Exploring SFML Graphics in C++ – Refactoring Wave 5

Roughly five years spans how long I have spent privately researching native graphics programming technology. It is not a professional effort but a work of curiosity. Several blog posts cover my encounters with native graphics code. The effort broadened my insight beyond what I had anticipated. This article covers the evolution of that understanding and continues from the Sept. 2014 discussion of the SFML graphics programming library.

Information on Native Graphics Code Research

A few of the books I have read describe both native graphics and the software code normally affiliated with it. With few exceptions, their authors did a great job describing the philosophy, background, concepts, suggestions, practices, and possibilities connected with the subjects they covered.

Books that I have read on graphics code were authored by the likes of Frank Luna, Jonathan Harbour, Artur Moreira, Robert Nystrom, and Charles Petzold. Collectively, their work applies to native graphics across Microsoft Windows, Mac OS X, Linux, and mobile operating systems. As I proceeded through the literature, I began to see fixed commonalities in how graphics code is written, structured, and expressed across operating systems and even across graphics build infrastructure. Most graphics systems map to the same concepts.

Native code, and topics closely related to it, I found in books authored by Jeff Duntemann, Ray Seyfarth, Bjarne Stroustrup, Alexander Stepanov, Nicolai Josuttis, Scott Meyers, Robert Seacord, Kyle Loudon, Charles Petzold, Thomas Cormen, Milan Stevanovic, Stephen Dewhurst, Stephen Kochan, Richard Reese, Peter van der Linden, Mario Hewardt, Daniel Pravat, Norman Matloff, and Peter Jay Salzman. I saw a similar core to programming across high-level and low-level coding practice. Granularity and efficiency can be understood well through the study and practice of native code. Counterintuitively, that practice can improve efforts in high-level design and general analysis.

A few titles expanded on terms and concepts concerning systems of logic. Authors of these titles include John Weiss, Georgi Shilov, Joan Bagaria, Imre Leader, and Timothy Gowers. Information in this area is useful in expressing ideas with greater consistency.

A Challenge to Personal Notions of Productivity

Use of managed software technologies is one of the skills I developed and used in information technology. Starting in 1999, I studied the Java platform as an introduction to enterprise-level object-oriented design. At that time, I was a practitioner of procedural programming with an eye towards better modularity. I once wrote hugely monolithic web programs in VBScript and Visual Basic; JavaScript was one place where I made an exception.

Slowly, I evolved towards object-oriented design in the year 2000. Looking to the future, I studied the early forms of Microsoft .NET technology through the Wrox Press “red books”. Previews of C# and VB.NET were available for download. A personal exploration of those technologies convinced me it was time to change course. Object-oriented design and its offshoots were the new methods I would apply in my work. They were comprehensive and immensely productive. Late in 2001, I shifted to C# and used it almost every business day (plus nights and weekends, until I learned work-life balance in 2010) until March 2013, when my employment circumstances went in a different direction.

Native code represented a challenge to my notions of productivity. I rarely had interest in native code because, in managed code, I could achieve solutions in a fraction of the time and with far greater breadth of features than was likely in native code. The productivity level of native code was not attractive, so I had other reasons to research it. My “professional” use of technology started in Web design and development, but by 2003 or 2004 I began to create desktop programs as well. In the beginning, they were not primary solutions; they were quicker ways to build administrative, test, and configuration programs.

By about 2007, I was doing both Web and desktop almost equally. My research into native graphics was a personal challenge to better understand the underlying native technology on which some of the managed desktop technology I was applying at the time sat. I was operating from the hypothesis that a better understanding of the foundation could lead to better decisions about the structure and integration possibilities of managed code. I saw that the nature of native code was contrary, in productivity, to what I understood about managed code.

Initial Methodological Contrasts

Increased private study of native graphics led to a need to better understand native code. Both showed me a structural foundation similar to the procedural programming practice of my past, before my adoption of object-based approaches. It was a stark contrast to what I had become accustomed to. At times, I had difficulty with what I perceived as a regression in my model of software development. The words that I read in the books were true, and they were expressed effectively through straightforward procedural code. Not in every case, but as a general matter of form.

After 5 years of sparse time reading materials that did not align closely with the technical literature of the previous 9 years, I was able to better balance and broaden my sense of several areas of technology. The journey did not make me an expert, but it made me aware that multiple ways of building a solution are valid in their appropriate contexts.

You can do object-oriented, component architecture in the C programming language, but modular procedural programming sits very well there. I saw many examples of the latter in my evaluation. Procedural programming is not something you have to relearn; it is programming, but we often choose approaches that eliminate the pitfalls of a pure procedural approach while describing those approaches according to the qualities they impart. What I did have to learn is that procedural programming is acceptable, despite having long accepted the idea that it was not. The evidence of its validity may exist in its continued exercise in the deeper levels of technology infrastructure and simulations. It has stood the test of time.

Relative Strengths of Code Technologies

A gradation from machine-broad technology to application-broad technology is necessary given the design of the machine(s) that application-broad technology rests upon. Assembler and C are at the beginning of that gradation. Most everything else is at the tail. C++ and its associated compilers are the best language/tool combination sitting in the middle of that gradation. Functional languages are near the extreme end, second only to pure CASE systems.

There are two primary reasons to choose a given programming language. If you program by trade (which I do not), you choose the language most compatible with your work style, preferences, and goals that is also in demand. That last part is very important. The second reason is that the language has the attributes (in itself and in related libraries) to address well the solution you are devising.

No language is the absolute best, but the closest is C in terms of practical technical possibilities. Beyond that, the choice is a matter of preference, goals, and demand. My favorite remains C# for professional work, but that is just a personal preference.

Why I Chose C++ for Native Graphics Research

A few of my articles covered my application of what I read. It began with articles covering the Allegro software library. Later articles dealt with the SFML software library. Before I decided to examine those libraries, I had to decide whether I would pursue the research in C or C++. First, I searched in vain for an ahead-of-time compiler for C#. They existed then, but not at a level I found acceptable. That was very disappointing. Most of what I read leveraged C, but I thought object-oriented design would be beneficial should the research evolve in terms of the types of models I could create around graphics libraries.

C++ was the most accomplished and easily available language system I could access to blend together the various things I was learning while affording me seamless branching of ideas into an object-oriented form. You can do this with C, but I felt the investments on this front were better addressed by the designers of C++ and those who implement the relevant compilers. I still decided to learn C, and to relearn assembly language, since much of the literature used C to illustrate ideas. I learned that you can pick up C in a few days, but it takes a while to work through C++. I learned C++98, then the standard changed and I learned C++11. I feel my perseverance was worth it.

Improving My Code Structure

It is one thing to learn languages, toolsets, and conventions. Understanding the design, structure, abstraction, and practices relevant to the high-quality use of a language is another matter. Fortunately, this information is published and need not be left entirely to chance.

The information I put into practice in my articles on the Allegro software library taught me a few things. Those insights eventually led to a certain structure in the solutions presented in my articles on the SFML software library. I took a pause after the 4th article on SFML in order to assess a better approach to my practice of software design. Using C++ with SFML, I learned to appreciate the RAII approach, but quickly discerned a problem of layering that could result in far more obtuse software if I was not careful.
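To make the RAII point concrete, here is a minimal sketch assuming SFML 2.x; the MainSurface wrapper name is my own illustration, not part of the library. The window is acquired in the constructor and released in the destructor, with no explicit cleanup call. The layering concern appears when wrappers like this are stacked several levels deep.

#include <SFML/Graphics.hpp>

// A minimal sketch of RAII with SFML. MainSurface is an illustrative
// name, not part of the library. The window resource is acquired by
// the constructor and released by the destructor, so no explicit
// cleanup call is required.
class MainSurface
{
public:
    MainSurface() : window(sf::VideoMode(640, 480), "Surface") {}
    sf::RenderWindow& get() { return window; }
private:
    sf::RenderWindow window; // owned for the lifetime of MainSurface
};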

To my mind, a better approach was closer to plain C (but not C-style C++), with key design features of objects when appropriate. Of course, appropriateness is idiosyncratic, but the idea appeals to me nonetheless. If I nudge further on this idea of cleaner code using SFML and C++ that is efficient and adaptable, then I need to think more carefully and design much better. I need to evolve beyond UML, node-based models, and semantic designs. Perhaps I need to evolve backwards, or in a different way altogether.

Generic Programming

The connotation of generic programming stands in sharp contrast to what I believe about computer technology. I am a believer that optimized technology outweighs adaptable technology. Still, adaptable technology that is efficient sounds good to me. That is what I read about in Alexander Stepanov’s book, From Mathematics to Generic Programming. I had never heard of him; I picked up the book out of curiosity even though the subject seemed to contrast with my optimization perspective. Mr. Stepanov, with 45+ years of experience in programming, taught me some truly great things about abstraction that I think are good concepts to retain and apply.

I am not sold on the idea of generic programming as an absolute practice, especially when I can see specific programming outpace it in terms of efficiency. However, I think it is highly worthwhile and may be the future.

Premature optimization is not a good practice, and optimization can destroy adaptability and intuitive comprehension. Specific approaches, as in writing code for a particular operating system and hardware environment (Windows and x86, iOS and iPhone, Android and Samsung), are natural, convenient, productive, and far easier to tune. That is a very attractive circumstance whatever language and solution toolsets you are using.

Reading Alexander Stepanov’s work, however, reflected certain thoughts I have had for a while: functions as readers of data types. Rather than have objects define the program, emphasize functions. I think that covers both procedural and functional programming on the surface. The difference in Alexander Stepanov’s approach is not to lay out a fabric of functions but to observe a perspective of quality functions. He offers the idea that generic, high-quality functions are the superior way. The programming language does not matter, but a highly efficient programming language can remove the barrier to generic, high-quality functions. Pondering his ideas, I decided to adjust my practices a little.
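To make the idea of a generic, high-quality function concrete, here is a sketch in the spirit of From Mathematics to Generic Programming, written for C++11. The formulation below is my own illustration rather than a quotation of Stepanov’s code: one power function that serves any type with an associative operation.

#include <functional>
#include <iostream>

// A generic power function by repeated squaring. It works for any type A
// with an associative operation op and any integer type N; requirements
// are documented rather than enforced, which is typical C++11 practice.
template <typename A, typename N, typename Op>
A power(A a, N n, Op op) // precondition: n > 0, op is associative
{
    while ((n & 1) == 0) // strip trailing even factors by squaring
    {
        a = op(a, a);
        n >>= 1;
    }
    A result = a;
    n >>= 1;
    while (n != 0) // square-and-combine over the remaining bits of n
    {
        a = op(a, a);
        if ((n & 1) != 0)
            result = op(result, a);
        n >>= 1;
    }
    return result;
}

int main()
{
    // The same function computes exponentiation and multiplication,
    // depending only on the operation supplied.
    std::cout << power(2, 10, std::multiplies<int>()) << '\n'; // 1024
    std::cout << power(2, 10, std::plus<int>()) << '\n';       // 20
    return 0;
}

The algorithm never changes; only the operation does. That, as I read it, is the quality Stepanov is after.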

A New Design for a Native Code User Interface Program

I thought about how the design of the solution I described in my SFML articles would change through a different abstraction approach. The excerpt in the next section is an exercise in exploring some of these design ideas. Rather than a visual diagram or spontaneous seed code, I decided to try design by specification. I wrote specifications for software teams in the past, but that approach declined as projects accelerated. A more organic approach involving diagrams, working prototypes, and direct iteration on code is a better fit in those instances.

As private research though, I can explore. I decided to write some of my ideas for a new solution in a structured design document. It is an effort to rigorously examine the components, relationships, and processes for a solution that may eventually be implemented using SFML, the STL, and Boost. Below is such an exploration of design by specification.


Potential Design for Text Processing in SFML

Text Input Representation

Store text values in the order entered. Express the values through interactive mechanisms. Allow adjustments to the values if appropriate.

General Process

Text values are printable values of symbolic meaning to people, output from a software data source. A software data source is a location holding data from a read of a keyboard key press or from other stored or dynamic information. Software data representing backspace and delete key presses are truncate operation operands whose values are not shown. Input at a selected range of text deletes all text in the range, with the new input inserted where the range begins. Captured text is shown in a geometric plane activated for input by programmatic or interactive decision. A sketch of this capture process follows.
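A minimal sketch of the capture process, assuming SFML 2.x: printable symbols are appended to the stored sequence, while backspace (code point 8) and delete (code point 127) act only as truncate operands and never appear in the output. Cursor-aware deletion is omitted for brevity.

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Text Capture");
    sf::String captured; // set T: symbols stored in the order entered

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
            else if (event.type == sf::Event::TextEntered)
            {
                sf::Uint32 symbol = event.text.unicode;
                if (symbol == 8 || symbol == 127)
                {
                    // backspace and delete: truncate operands, values not
                    // shown; both trim the tail here for brevity, though a
                    // full design would delete at the cursor position
                    if (captured.getSize() > 0)
                        captured.erase(captured.getSize() - 1);
                }
                else
                {
                    captured += symbol; // printable value stored in entry order
                }
            }
        }
        window.clear();
        // drawing the captured text to the geometric plane is omitted;
        // it would use sf::Text with a loaded sf::Font
        window.display();
    }
    return 0;
}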

General Description of the Text Values Set

Set T is the sequence of symbols captured for display to a geometric plane. Set T is the prime set under this process. Set T is initially empty. Operations may translate it into the form {T : O ∪ K}.

Function d(x) yields all printable characters from a given set x of symbols.

Set U is all symbols defined in the Unicode Standard.
Set X is all non-printable symbols as well as delete and backspace, {X : ¬d(U)}.
Set P is all printable symbols in the Unicode Standard, {P : d(U)}.
Set V is all symbols defined for system-specific virtual keyboards.
Set K is the printable subset of V, {K : d(V)}.
Set O is all symbols not available to set K, {O : U – K}.
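These sets can be read as predicates over code points. The rough sketch below assumes a char32_t representation; std::iswprint is only an approximation of d(x), far less precise than the Unicode Standard's own definitions.

#include <cwctype>

// Approximate predicate forms of the sets above. d() stands in for the
// printable-character function; std::iswprint is a rough stand-in for
// the Unicode Standard's definition of printability.
bool d(char32_t x)
{
    return std::iswprint(static_cast<wint_t>(x)) != 0;
}

bool inSetX(char32_t x) // X: non-printable symbols plus delete and backspace
{
    return !d(x) || x == 8 || x == 127;
}

bool inSetP(char32_t x) // P: printable symbols, {P : d(U)}
{
    return d(x);
}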

Minimum Functions of Interest

Adjust the sequence of stored symbols by truncation, substitution, or expansion. Truncation is the removal of symbols, which narrows the sequence. Substitution replaces one symbol with another. Expansion adds symbols, which widens the sequence.

Functions may truncate, revise, or expand set T. Revision substitutes values at a given position. Expansion applies values possible for set T to T. A sketch of these functions follows.
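As a sketch of these minimum functions, assume set T is held in a std::u32string; the function names and signatures below are illustrative only, not part of any library.

#include <cstddef>
#include <string>

// Illustrative forms of the minimum functions over set T.
void truncate(std::u32string& t, std::size_t pos, std::size_t count)
{
    t.erase(pos, count); // removal narrows the sequence
}

void revise(std::u32string& t, std::size_t pos, char32_t symbol)
{
    if (pos < t.size())
        t[pos] = symbol; // substitution at a given position
}

void expand(std::u32string& t, std::size_t pos, const std::u32string& symbols)
{
    t.insert(pos, symbols); // expansion widens the sequence
}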

Processing Relationships

Elements define, at a minimum, the state of set T and the relationship between set T and the geometric plane. Relationships besides these are likely but not explored in this definition iteration. A few of the processing relationships that can be identified and applied to the minimum functions follow, with a structural sketch after the list.

Keys input. This is set T. Subsequent relational elements constrain operations on this set.
Cursor position. A set of coordinates translates into where text interaction begins.
Selection range. Defines the inclusive positions of text where interaction may apply.
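One possible flat structure tying these relationships together is sketched below; the structure and field names are illustrative assumptions rather than a settled design.

#include <cstddef>
#include <string>

// A flat representation of the processing relationships: the captured
// sequence (set T), the cursor, and the selection range.
struct TextState
{
    std::u32string text;            // keys input: set T
    std::size_t cursor = 0;         // where text interaction begins
    std::size_t selectionBegin = 0; // inclusive start of the selection range
    std::size_t selectionEnd = 0;   // inclusive end of the selection range
};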


Pondering a Different Way to Design Systems

When designing the object-oriented way, I feel there is a tendency to think visually. Diagrams, tree models of objects, and imparting an identity to software code can be useful tools and metaphors. Objects are very convenient, and you can trace down functionality very easily. I will still use object-oriented design; the capabilities are useful. A benefit of Alexander Stepanov’s approach is that, when applied to object-oriented design, it may produce designs that are much sharper and less obtuse.

Really, that is the truth. I am trying to avoid the kind of obtuse design that object-oriented software can encourage. You end up creating too much code, too many layers, and too much indirection. You can, and I have, made such code run very fast. Often though, productivity with such designs can be misleading, especially in optimization efforts to mitigate performance issues. I did some research on high performance systems in 2014 and saw a tendency for those systems to be flatter in definition.

The bottom line is the idea of creating something substantial with less code scaffold weight. It may be a unicorn, but it is probably a worthwhile pursuit regardless. Anyway, Alexander Stepanov seems to have expressed ideas, and provided examples, in his books that may be a step in the right direction.

Compare the example above to the UML diagram in my first SFML Graphics in C++ article. I believe the approach above has stronger coherence. Future updates on this topic will appear on the Michael Gautier Technology blog, which I set aside for this and other research that presents technical code.

Related Articles:

Third Series on Cross Platform Program Creation:

The first series involved writing a program using the Allegro library. This article is the last of the second series. The process continues in a third series that goes much further in the organization and structure of the program. It departs considerably from the object-oriented design established in the prior two series. The third series can be found at the following link.

Build a Cross-Platform C++ Program with SFML
