What makes performance decrease over time?
In an excellent blog post, Nikita Prokopov shares his thoughts about “our industry’s lack of care for efficiency, simplicity, and excellence,” explaining, among other things, that year after year, hardware becomes more and more powerful, but applications remain slow or become slower.
I'm afraid I know what causes this phenomenon.
First cause: layers of complexity
A few years ago, web development meant JavaScript, jQuery, and, sometimes, a small number of libraries carefully chosen for a given project. And that was it. If you were a beginner, you couldn't go far with that. If you were more skillful, you could slowly craft a large application, and by the time you finished implementing most of its features, you would know a lot about performance and optimization.
Today, we have Angular, and we have React, and dozens of other frameworks. On top of them, we have npm libraries which are so easy to install that most projects start with dozens of dependencies. Got a feature in mind? There is an npm package for it. No need to develop one yourself; just add a dependency. And then, one doesn't write in JavaScript; one uses TypeScript, or some other fancy language that happens to be fashionable, which means that the developer never wrote the code which is served to the clients and executed by the browsers.
The mainstream opinion is that all those frameworks and libraries and languages make web development much faster. With more than two years' experience observing teams working with those frameworks, libraries and languages, I would disagree. The time wasted setting up those layers of complexity, migrating from AngularJS to Angular, discussing how Redux should be used, and trying to figure out, through the layers of complexity, the location of a nasty bug, is quite impressive. The time saved by an additional dependency or an additional leaky interface? Not so much.
However, those technologies are fashionable, and therefore broadly used. And, obviously, they add not only to the complexity of the projects, but also to their size. Nowadays, it is not unusual for the most elementary web pages to have a footprint of several megabytes. A few examples:
- The home page of the French railway company performs 119 requests, for a total of 2.8 MB (AdBlock enabled), and finishes loading after 8.24 seconds.
- A simple article on Medium.com wastes 1.4 MB over 53 requests, finishing after 7.89 seconds.
- Amazon's home page performs 305 requests, using 6.3 MB and taking 4.67 seconds.
- FedEx's home page performs 86 requests, transferring 3.4 MB over 6.81 seconds.
Many young programmers, when starting a new application, begin by adding frameworks, tools and dozens of libraries. Right from the start, even before providing any value, the web application is heavy and slow. This is crazy, but fashionable.
If only this was limited to the Internet. Desktop applications have the same problem. Six years ago, I asked a group of IT students who had been learning C# for a few years whether anybody knew how to create a simple “Hello, World!” console application from scratch without using an IDE. Nobody could answer. A decade ago, I was creating C++ desktop applications this way: real desktop apps with buttons, text boxes, all that. Today, when I see all the new layers which have grown around C# desktop applications, I'm not sure I would be capable of creating a Hello World using just vi.
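For reference, that exercise really is tiny. A minimal sketch, assuming the classic csc compiler that ships with the .NET Framework SDK is on the PATH:

```csharp
// Hello.cs - a complete C# console application: no project file, no IDE.
using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello, World!");
    }
}
```

Compiling it is a single command, something like csc Hello.cs, which produces Hello.exe; that is the whole build.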
Mobile apps? Same thing. When I tried creating Android apps, I was amazed at the quantity of code which I had to commit to version control after creating an empty project. I'm unable to understand any of it, and I have no idea about what is happening under the hood. I know where to click in the IDE when things don't work, because I googled it. Does it make me a good developer? I hardly think so.
Second cause: developers' missing skills
Imagine yourself thirty years ago, in 1988. You will have to wait seven years before you can use Java. There is C++, but there are no smart pointers yet. There is no Stack Overflow. You can't just copy-paste code you find on the Internet; you have to read books to learn things, because if you don't, C++ will bring you a few surprises. Because the languages are low level, you have to have a basic understanding of how computers work. You have to be, if not a great developer, at least quite skillful at what you do if you want to succeed.
Not any longer. With all the things abstracting other things which, in turn, abstract other things, you can create applications which kinda work without any particular skill. Instead of learning the basic things about a computer, you learn how to use a framework. Instead of understanding the algorithms and their quirks, you use libraries. Instead of understanding data structures, you use a high level framework which hides all the complexity.
And so, you expect that those libraries and frameworks will figure out the internals for you.
I stopped counting the number of times I've seen the wrong data types used in C# for collections. For some reason, everything becomes a List<T>, unless it's a Dictionary<TKey, TValue>. I'm still surprised when I encounter “professional developers” with more than eight years' experience in C# who don't know that there is such a thing as a HashSet<T>, or who use a simple list where the algorithm calls for a Stack<T> or a Queue<T>, or who are unable to explain what a LinkedList<T> is and in which cases one would want to use it.
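To make the difference concrete, here is a minimal C# sketch; the banned-users check and the tree node are invented examples, not taken from any real project:

```csharp
using System.Collections.Generic;

class DataStructureExamples
{
    // Membership test with a List<T>: Contains scans the whole list, O(n) per call.
    static bool IsBannedSlow(List<string> bannedUsers, string user)
        => bannedUsers.Contains(user);

    // Same test with a HashSet<T>: Contains uses hashing, O(1) on average.
    static bool IsBannedFast(HashSet<string> bannedUsers, string user)
        => bannedUsers.Contains(user);

    // Depth-first traversal without recursion: an explicit Stack<T> replaces
    // the call stack and will not overflow it on deep trees.
    static IEnumerable<Node> DepthFirst(Node root)
    {
        var pending = new Stack<Node>();
        pending.Push(root);
        while (pending.Count > 0)
        {
            var current = pending.Pop();
            yield return current;
            foreach (var child in current.Children)
                pending.Push(child);
        }
    }

    class Node
    {
        public List<Node> Children { get; } = new List<Node>();
    }
}
```

Swapping the Stack<T> for a Queue<T> (Enqueue/Dequeue instead of Push/Pop) turns the same loop into a breadth-first traversal, which is exactly the kind of trade-off being discussed here.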
So when those “professional developers” are writing an application which uses more CPU or memory than an average app, they still reach for the wrong data structures by default, because this is what they have done their whole career. Then, when the app becomes sluggish, they buy new hardware to make it run faster.
Third cause: emphasis on hardware
Developers are paid well enough. They can afford to buy hardware and gadgets. Why is my mobile app running slow? I think this is an opportunity to purchase an iPhone X. Browsing websites is not fun any longer? Obviously, it's because I haven't bought a new PC in the last four years!
Developers are always complaining about their devices. Not enough memory. The SSD is not fast enough. There is no USB 3. This is especially impressive at the workplace. My current colleagues have laptops I could only have dreamed about three years ago, and they still find reasons to complain.
This overemphasis on hardware creates excuses. Thirty years ago, you wrote an app, and then you necessarily tried to optimize it. By doing it again and again, you learned how to do it. Today, many so-called “professional developers” have no idea how to optimize things: they don't know which patterns they should apply, and they don't know the technology well enough.
Patterns seem to be the most worrisome part. For too many developers, optimization consists of telling yourself one day that you should make some piece of code faster, and then changing the code randomly, based on the assumption that those changes will make it faster. No need to measure anything. Change something anywhere in the code, and claim that it's optimized now.
The assumptions should preferably be low level, such as: “I'm sure a switch is faster than a dictionary lookup, so I'll replace every occurrence of a lookup with a switch.” Or something like: “I should really remove those intermediary variables and inline those methods; do you have any idea how much time is spent on a method call?”
They don't know how to use a profiler. The notion of a bottleneck is unfamiliar to them. And they are truly convinced they are smarter than the compiler.
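Here is a minimal sketch of what measuring first looks like, using nothing fancier than System.Diagnostics.Stopwatch; a profiler or a benchmarking library would be the proper tool, and the switch-versus-dictionary mapping below is an invented example:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class MeasureFirst
{
    static readonly Dictionary<int, string> Lookup = new Dictionary<int, string>
    {
        { 1, "one" }, { 2, "two" }, { 3, "three" }
    };

    // The "obviously faster" variant, according to the assumption.
    static string ViaSwitch(int n)
    {
        switch (n)
        {
            case 1: return "one";
            case 2: return "two";
            case 3: return "three";
            default: return "?";
        }
    }

    // The variant the assumption wants to replace.
    static string ViaDictionary(int n)
        => Lookup.TryGetValue(n, out var s) ? s : "?";

    static void Main()
    {
        const int iterations = 10_000_000;
        long consumed = 0; // consume the results so the loops aren't optimized away

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) consumed += ViaSwitch(i % 4).Length;
        sw.Stop();
        Console.WriteLine($"switch:     {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++) consumed += ViaDictionary(i % 4).Length;
        sw.Stop();
        Console.WriteLine($"dictionary: {sw.ElapsedMilliseconds} ms ({consumed})");
    }
}
```

Whatever the numbers turn out to be on a given machine, the point is that they exist before anyone rewrites anything.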
Conclusion
Those three causes, namely the crazy level of complexity, the poor skills of software developers, and the lack of optimization expertise caused by the emphasis on hardware, lead to the situation we have now. I'm not sure this situation is that bad. I'm inclined to think that, with hardware becoming cheaper and cheaper, the gains at that level can compensate for the losses at the software level. I also believe that competition may bring the focus back to faster, smaller applications, developed by more skillful developers who care more about their users.
However, there may be another side to it. While hardware becomes cheaper over time, there is a difference between what an average American developer can afford and what is available to a low-income person in a poor country, and that gap could grow. Effectively telling this second category of users that we, the IT professionals, don't care about them is not a good thing. Another gap which will grow is the one between the large companies which can afford the best developers and the companies which cannot. While the hype goes to the apps developed by the first group, the applications used eight hours per day by accountants, lawyers, dentists, and hundreds of other professions are developed by the second.
In order to stop this decline, it is essential to:
- Teach fellow developers the basic, low-level things. Two of my colleagues (eight years of experience in software development each) admitted that they learned assembly language in college for a few hours. Not days. Hours. Most developers have no experience with C or C++ or any other low-level language. Many web developers don't understand HTTP. They should practice writing apps at a low level, with no frameworks or fancy libraries allowed (see the sketch after this list).
- Data structures should be a mandatory course for every student. Hiring interviews should emphasize questions about data structures, memory footprint and Big O notation, checking that the candidate has the basic skills needed to understand the performance repercussions of the choices he makes during development.
- Optimization should be a first-class citizen among the requirements, and developers should be taught which patterns to apply to meet performance requirements. Interviewers should check that a candidate has a basic understanding of compiler optimizations and knows what counts as premature optimization and what is just common sense.
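As an illustration of the first point, here is what a no-framework exercise can look like: a bare-bones HTTP server written in C# against System.Net.HttpListener alone. The port and the response are arbitrary, and this is a practice sketch, not production code:

```csharp
using System;
using System.Net;
using System.Text;

class BareHttpServer
{
    static void Main()
    {
        // One prefix, one listener, no web framework.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/ ...");

        while (true)
        {
            // Blocks until a request arrives; you get to look at the method,
            // the URL, the headers and the status code yourself.
            HttpListenerContext context = listener.GetContext();
            Console.WriteLine($"{context.Request.HttpMethod} {context.Request.Url}");

            byte[] body = Encoding.UTF8.GetBytes("Hello, World!");
            context.Response.StatusCode = 200;
            context.Response.ContentType = "text/plain";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.OutputStream.Close();
        }
    }
}
```

Writing even this much forces a developer to think about status codes, content types and content length, which is precisely the understanding of HTTP that frameworks hide.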