We have all encountered computer programs with glitches. Most programs have them; it is a common occurrence.
How much critical software from Microsoft has to be patched regularly? Quite a lot. Even Windows Update itself sometimes glitches in the course of pushing out glitch fixes.
Android gets regular updates too. A 5 MB update here, a 20 MB update there, and you easily rack up 200 MB just in updates. The same goes for iOS, macOS, and Linux.
Is it the fault of the programmers or the tools programmers use? A good discussion about that is on Slashdot:
Rapid Application Development
Companies like Microsoft wanted to make programming easy. They were joined by companies such as Borland and Sun Microsystems. This resulted in Visual Basic, Delphi, and Java, as well as visual drag-and-drop tools that automated the creation of GUI screens. The success of these tools changed how programmers thought about productivity: they sped up software development.
Deadlines, spur-of-the-moment changes in business conditions, and fear of losing ground to competitors or harming delicate customer relationships make quickly written software a priority. Whether in the largest Fortune 500 enterprise, a commercial tech company, or the smallest unofficial IT shop, the pressure to produce results quickly is always in the background.
Do you just use the tools, or do you really think about the structure of a system? Do you even have time to work through the various factors that can impact it?
How often do you hear the industry advice "just code it to work first", or some variation reminding you that new hardware is cheaper than the programmer time needed to optimize a system? I have witnessed hypocrisy on Internet forums: software experts who spent years giving some variation of this advice, only to gently scold people for lacking the background to optimize.
It seems the decades-long advice against "premature optimization" may have worked too well. We have all this software in existence now, but most of it has security or performance issues, or, in the case of driverless cars circa 2016, safety issues. The only real miracle is rapid development.
I have seen many programmers become too dependent on drag-and-drop databinding, for example. Sometimes they lose control over the code, lose predictability, have to write extensions to overcome limitations in the databinding constructs, lose control over performance, and end up with solutions that cannot adapt to a variety of backend architectures.
I have literally seen programmers struggle with code that did not come from a visual development process, and struggle to write a solid solution without the significant assistance of application-level code automation.
I am not against these tools. I use them as well, but more selectively. The programmer is supposed to drive this process, not be driven by the tools or vendor dogma.
Primary Reasons for Less Optimal Software
I have taken systems, some built by me and some by others, that ran in 1 to 3 hours and sped them up to less than 5 to 7 minutes, sometimes 30 seconds. Systems can often (though not always) be optimized without changing hardware, but it can take more programmer time.
Other systems I have intentionally designed up front with optimization in mind, and most aspects of those systems run in a fraction of a second: hand coded, with minimal to no use of the more comprehensive application-level code automation facilities.
Software can be slow for various reasons.
- The programmer wanted pretty code.
- Following "best practices" that are rooted in taking the easy path to building a program.
- Writing code the way everyone else does so you fit in from a job-opportunity standpoint, even though the program you delivered to your present client suffers for it.
- An unbalanced view of algorithms versus data.
Next, we have the matter of security. There is such a thing as 100% secure software. Or, should I say, I trust the word of PhDs in Computer Science from the 1960s and 70s who describe the more secure hardware platforms they had. The problem is that neither Intel nor AMD can deliver that level of security on their platforms due to backwards compatibility.
Even with the deficiencies in hardware and networking technology, application software does not have to be as insecure as it is. The first principle: do not make off-by-one errors. Clever pointer tricks in C may make you look like a genius, but they can make millions of people poor by providing an unauthorized gateway into their bank accounts.
The other flaw is the idea that code is secure because it runs on .NET or Java. Those platforms are insecure too; the proof is in their patch histories. The vendors behind those platforms marketed them as more secure. A false promise.
A programmer cannot defer their responsibility for writing a solid computer program or system to third-party tools (commercial or open-source) or to best-practice sound bites on Internet forums. The individual and the team must be capable of producing quality even when the limits of the tools are encountered. You should be able to envision a system and carry it to a reasonably approximate implementation without the aid of visual coding tools. That does not mean you take that approach every time, but you should have the capacity to navigate directly from concept to solution.