You have probably heard the news that any computing device can be hacked. It is now officially confirmed that tools exist to break the security of any computer or mobile device connected to the Internet. I have said things like that on this blog in the past; now there is genuine confirmation of my observations.
What does this mean? A few things.
- Most people, perhaps 90%, don’t care whether they are spied on because they see no obvious harm.
- People who do care don’t have convenient options.
- Anyone exchanging money, social security numbers, photo ID numbers, or sensitive conversations over anything Internet-based is making a mistake.
- The web is for public data. Encryption is useful for minimizing website defacement and casual snooping, but it is not a reliable way to lock down data permanently.
- Session hijacking is rampant. In many places, people nearby are conducting illegal, unofficial wiretapping. It is the new side gig: selling you out, keeping tabs on you, or tapping into your computer and online storage accounts to see what kinds of files you have.
- Targeted denial of service can block certain online activities for reasons ranging from the arbitrary to sophisticated remote puppetry and observation of your reactions. Sometimes your ISP isn’t at fault when your use of an online service doesn’t work right; it may be an actual remote operator giving you a difficult time.
I’ve observed these things for years. Finally, there is proof behind the claims I have made for the last 10 years. What do you do about all this?
- You could stop using the Internet and mobile technology.
- Go back to flip phones.
- Use the physical post office.
- Only use social media and email for things you don’t mind being public, and avoid eCommerce, eBanking, and e-anything that could turn into identity hijacking or general personal compromise.
- Only use Apple computers. Nothing is guaranteed safe, but Apple’s technology mindset goes furthest in this direction, and they seem to put the most effort into security that is not just for show. Unfortunately, Apple is not an option for everyone.
- Hope the tech industry enters a hardware renaissance that produces machines more resilient to backdoors.
- Mentally resist online manipulation. Do not be afraid of being watched. Use these technologies as the public tools they are and not under the expectation of privacy.
The lack of disclosure of a security breach, or the absence of evidence of a breach, is not evidence that a breach did not occur. There are big companies that have never publicly disclosed that they were breached or that information, transactions, or data you shared with them leaked to third parties. I’ve studied reports about malware and spyware enough to say that the nature of those tools is such that likely just about everybody and everything that is or has been connected to the Internet has been breached.
In the future, some of the most sought-after people will be those with the smallest history or footprint of any kind on the Internet. With all I have stated, I still see value in the Internet and the Web, particularly for research, looking things up, and matters that can be out in the open. That aligns primarily with point 4 in each of the bulleted lists presented earlier.
A good description of the ebbs and flows of a career in software development, written by a 30+ year veteran of the field. See: Is software development really a dead-end job after 35-40? by Kurt Guntheroth. Note that exceptions do exist but may be rare.
Unless you totally give up on document formatting compatibility. The workaround for LibreOffice is to distribute read-only versions of documents as PDF, since PDF generally preserves the original look of documents. Also see on Quora:
Why doesn’t anyone compete with Microsoft and create a new, better Microsoft office suite? by Ed Horch.
Andrei Ion Rînea had a good response on Quora about unpopular opinions in software development. He succinctly states why he thinks ORMs are bad, and I fully agree. Tools like Entity Framework, ADO.NET Typed DataSets, and others of their kind work well for small software development projects that will always involve small amounts of data. If the data the program must process, or the structure of the database, expands significantly to use more of the database’s features, these ORM tools start to fall apart. Either they can’t handle large volumes of data as well as a hand-tuned design with hand-tuned code and hand-tuned SQL can, or they can’t represent the full capabilities of a database the way you can when you hand-write the integration between a program, its data inputs/outputs, and a database.
Good ORMs do exist. A really good one is Dapper, by the creators of StackOverflow. The StackOverflow website handles roughly 1 to 5 million database-driven web-page accesses a day, coming from hundreds of thousands of people using the website, and potentially hitting the database, at the same time. Even the smallest drop in efficiency, multiplied by the number of accesses, could have a huge impact. StackOverflow did not take any chances with the way they wrote data access code. They opted out of the standard Microsoft tools such as Entity Framework and now use their own. They produced something simpler, more streamlined, and more efficient, and it is a case study in sometimes going the custom route.
You’ll notice from this GitHub screenshot from the Dapper page the relative efficiency of writing your own data access code versus an easy-to-use out-of-the-box solution. Notice that hand-coded beats everything. Dapper, which is a close automation of hand-coded data access, is very close behind. If you don’t want to hand code but want to avoid the overhead and other problems of Entity Framework, Dapper is a good choice. In general, doing your own thing, or using a methodology like the one behind Dapper, is often the better choice for scalability and for getting full access to the database’s capabilities.
Small update. The base class, InteractiveDisplay.hxx, was updated to keep the public definitions but with the virtual specifier removed. Private methods with the same name and signature as the public methods were introduced, adding the Impl suffix to the name. Those private methods were then made virtual, and derived classes now override them. This is a stylistic preference advocated by experts, and I decided to try it out. You can read about private virtual functions here.
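The pattern described above is the non-virtual interface (NVI) idiom. Here is a minimal sketch using a simplified stand-in for the real InteractiveDisplay class; the string return values are hypothetical, chosen only so the behavior is observable:

```cpp
#include <string>

// Simplified stand-in for the InteractiveDisplay base class.
class InteractiveDisplay {
public:
	// Public, non-virtual entry point: the interface stays fixed
	// for all derived classes.
	std::string UpdateVisualOutput() {
		return UpdateVisualOutputImpl();
	}

	virtual ~InteractiveDisplay() = default;

private:
	// Private virtual customization point, named with the Impl suffix.
	virtual std::string UpdateVisualOutputImpl() {
		return "base display";
	}
};

class RssDisplay : public InteractiveDisplay {
private:
	// Derived classes override only the private virtual method;
	// callers still go through the public non-virtual function.
	std::string UpdateVisualOutputImpl() override {
		return "rss display";
	}
};
```

The benefit is that the base class controls when and how the customization point is called, while derived classes supply only the varying behavior.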
The actual code update visual graph is here.
The latest commit is on GitHub. I took the application-level code in RssReader.cxx and placed it into RssDisplay.cxx. The result of the change is that RssReader has a single responsibility rather than several. RssReader is now just a bridge between PrimarySurfaceDisplayWindow and the application-level processes. Low-level graphics/interactivity requests and data are covered by PrimarySurfaceDisplayWindow and converted into high-level graphics/interactivity requests and data received by RssReader. RssReader then maps each high-level request to the appropriate display screen.
RssDisplay represents the main display screen. Other screens introduced into the process would then receive the same graphics/interactivity requests from RssReader in a standardized way. I introduced dynamic binding through a class named InteractiveDisplay. RssDisplay inherits from InteractiveDisplay and overrides its virtual methods. I’ve kept the use of virtual methods to a minimum, and this is the only place at the application level where such methods exist. The goal is to enable multiple display screens, conforming to a general standard for receiving graphics/interactivity data, to respond to that data and then output to the screen within an overall output stage represented by the overridden UpdateVisualOutput function.
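The dispatch just described can be sketched roughly as follows. The class names mirror the post, but the member signatures, the ClickEvent type, and the string return values are simplified assumptions for illustration, not the actual GautierRss code:

```cpp
#include <memory>
#include <string>

// Hypothetical high-level interaction data forwarded by the bridge.
struct ClickEvent {
	int x;
	int y;
};

// General standard for any display screen that receives
// graphics/interactivity data.
class InteractiveDisplay {
public:
	virtual void ReceiveClick(const ClickEvent& click) = 0;
	virtual std::string UpdateVisualOutput() = 0;
	virtual ~InteractiveDisplay() = default;
};

// The main display screen overrides the virtual methods.
class RssDisplay : public InteractiveDisplay {
public:
	void ReceiveClick(const ClickEvent& click) override {
		last_x = click.x;
	}

	std::string UpdateVisualOutput() override {
		return "rss screen at " + std::to_string(last_x);
	}

private:
	int last_x = 0;
};

// RssReader acts as a bridge: it maps each high-level request
// to the appropriate (here, the only) display screen.
class RssReader {
public:
	explicit RssReader(std::unique_ptr<InteractiveDisplay> screen)
		: active_screen(std::move(screen)) {}

	std::string ProcessClick(const ClickEvent& click) {
		active_screen->ReceiveClick(click);
		return active_screen->UpdateVisualOutput();
	}

private:
	std::unique_ptr<InteractiveDisplay> active_screen;
};
```

Additional screens would plug in by deriving from InteractiveDisplay, leaving RssReader unchanged.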
The program runs the same as before, but it can now be expanded in a more structured way in terms of how the program is defined in C++. The added structure improves the isolation of the low-level part of the software from the high-level part. The application level (the business end) of the program is now firmly divided from the graphics-engine part that specializes in the technical mechanics of forming windows, handling button clicks, and managing general changes in graphics size. At the same time, the baseline is set to enable the addition of new screens in a more predictable manner.
A few updates on the 2016 version of the GautierRss program, which is also an example of a C++14 GUI from scratch. The previous articles were based on a single makefile that used static libraries. Those libraries were manually built into a specific directory visible to that single makefile. The capability to do a static link is maintained, but I decided to set up a second makefile that uses shared versions of the libraries available from the public repository for the Fedora Linux distribution (and presumably the repositories for Ubuntu, SUSE, and others). Using shared libraries has several advantages, as does using static libraries. With two build scripts supporting either scenario, I gain a bit of flexibility depending on how the solution is deployed. With shared libraries, the program can receive updates to those libraries automatically (this is not always the case, but in principle it can be a good idea). At the same time, two build scripts help avoid a type of tunnel vision: they let you verify in reality that the code base can be built for different runtime scenarios, and they let you test those scenarios.
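The two-makefile setup might look roughly like the fragment below. The target, object, and library names are hypothetical placeholders for illustration, not the actual GautierRss build scripts:

```make
# Hypothetical sketch of the two build scenarios; names are placeholders.
OBJECTS = rss_reader.o rss_display.o

# Shared-library makefile: link against distro-provided shared libraries.
gautier_rss_shared: $(OBJECTS)
	g++ -o gautier_rss $(OBJECTS) -lexample_gui -lexample_net

# Static-library makefile: link archives manually built into a local directory.
gautier_rss_static: $(OBJECTS)
	g++ -o gautier_rss $(OBJECTS) -L./libs -Wl,-Bstatic -lexample_gui -lexample_net -Wl,-Bdynamic
```

Since the object files are identical in both cases, only the link step differs between the two scripts.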
The screenshot below shows the output when using the makefile that links shared libraries to the executable. The item in green is the executable. The size of the program is shown to be 165KB. The items with the file extension ending in .o are the object files that are linked together to form the executable. You’ll notice something interesting about their sizes later on.
The second screenshot shows the same executable built with the makefile that uses static versions of the libraries. This executable is largely self-contained, and that shows in its size: it currently stands at 12.6MB. That’s not that large a file in a desktop program scenario. Notice that the object files are unchanged in size. That is because static builds are a function of linking rather than compiling.
An updated version of the code base has been published. One of the great things about GitHub is how it enhances the use of git source control. The following screenshot is taken from the current build shown in this article. I added operator overloading to the InteractionState class in order to simplify some of the code in the PrimaryDisplaySurfaceWindow class with a corresponding uptick in efficiency. The rest of the changes can be seen through the GitHub visualizer for this commit.
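As a rough sketch of the kind of operator overloading described, here is a simplified InteractionState with hypothetical fields; the real class in the repository differs:

```cpp
// Hypothetical simplified interaction state; the fields are
// illustrative, not the actual InteractionState members.
struct InteractionState {
	int mouse_x = 0;
	int mouse_y = 0;
	bool button_down = false;
};

// operator== lets window code compare whole states directly
// instead of repeating field-by-field checks at each call site.
inline bool operator==(const InteractionState& left, const InteractionState& right) {
	return left.mouse_x == right.mouse_x
		&& left.mouse_y == right.mouse_y
		&& left.button_down == right.button_down;
}

inline bool operator!=(const InteractionState& left, const InteractionState& right) {
	return !(left == right);
}
```

A window class could then skip redraw work with a single `if (current != previous)` check, which is the kind of simplification, and small efficiency gain, the commit describes.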