Programming made easy
People say programmers won't be needed one day
When I started programming, a few people around me told me that this might not be the smartest career path, because in a few decades we may not need programmers any longer. The idea is that sooner or later, development will become easy enough that any ordinary person will be able to build applications and it will magically work. The idea is supported by ads for code generation tools, and by the fact that an application which a decade ago required a team of ten experienced developers working for six months can now be built by a young dude who spent the last week learning PHP. Simplicity, a key feature of any advertised technology, stack, framework or programming language, coupled with stories of guys who built yet another multi-billion startup in a few weeks, makes the idea look legitimate.
But it has a flaw. A programmer is a person who writes computer software: if anyone could make software, then everybody would be a programmer. Let me rephrase it: any non-professional programmer would be able to build applications. And that's fine, and not that different from now, when there are plenty of self-proclaimed programmers who don't know how to code.
Any person can create small programs without learning too much. You don't have to spend years studying programming, but you do have to learn at least something to create a useful program which works, and even this small amount of things to learn would be overwhelming to most people: most people can't even learn how to use a word processor or a graphical editor.
This leads us to the notion of barriers to entry. To become a brain surgeon, for example, the barrier to entry is high. One has to spend more than ten years at a university and succeed. This is a huge investment: there is a large amount to learn, and one should practice a lot before being able to perform a first surgery. To become a programmer, that is, to write at least one useful program which works, one just has to spend a few weeks learning online. No need to buy expensive software or even books. No need to pass an exam.
Were barriers to entry higher in the past? They were, before the internet and inexpensive computing power. Today, those barriers can hardly be lowered further.
What about code generation tools and the new technologies, stacks, frameworks and programming languages which will magically solve every problem I have? Code generation tools are overrated, and the people working on them tend to be overly optimistic. Code generation works fine for simple cases where a machine outperforms a human:
Positioning controls inside a window is easy for a machine given graphical input; a human, on the other hand, would perform badly with no visualization.
Repetitive tasks are always better done by a machine. Microsoft's T4 templates coupled with Entity Framework's drag and drop of database objects, for example, are great.
Generating empty structure, such as classes from a class diagram in Visual Studio, can be helpful too (even if Visual Studio is brain damaged when it comes to some features, it's still the best IDE I've used).
Injecting metadata, such as the metadata required to profile an application, is better done by a machine, without altering the source code under version control.
Intermediary code produced by compiling the original source can enable optimizations which would be difficult for a programmer to handle by hand (example: IL code in .NET).
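The repetitive-generation case can be sketched in a few lines. This is a toy Python illustration of the template idea behind tools like T4; the schema and all names in it are invented for the example:

```python
# Toy template-based code generation: given a table schema, emit a
# data class with one field per column. Schema and names are invented.
schema = {"Customer": ["id", "name", "email"]}

def generate_class(name, columns):
    lines = [f"class {name}:"]
    lines.append(f"    def __init__(self, {', '.join(columns)}):")
    for col in columns:
        lines.append(f"        self.{col} = {col}")
    return "\n".join(lines)

code = generate_class("Customer", schema["Customer"])
print(code)
```

Writing the template once beats hand-maintaining dozens of near-identical classes, which is exactly the kind of task where a machine outperforms a human.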
Aside from those basic cases, code generation is useless. Some people imagine that one day, code generation will be able to create highly optimized code from a bunch of UML diagrams (they'll even quote an app which is already doing it). There is no such app, and it's unlikely one will ever be created, because computers are not smart.
There is also one crucial point: the best way to communicate intention. When it comes to controls in a window, a graphical tool is the best way to communicate that this button is large and positioned here, while that one is small and sits in this corner, but not too close to the border. This makes code generation useful here. Programming logic, on the other hand, is easily expressed through code, and it looks like no other communication medium can outperform it. This is the main reason why code generation fails for ordinary programming logic: even if diagrams offered the same flexibility, it would still be simpler to write the code than to express the same thing through diagrams.
Take Microsoft's Workflow Foundation. It's helpful when you need to assemble different blocks to solve a basic problem while keeping the workflow changeable at runtime. Now imagine a medium-scale application created with only workflows; what would that look like? Actually, an even more basic example is quite illustrative: what's easier, to write an if statement in code, or to draw it as a diagram? It should also be noted that in an actual language, an if may be replaced, when appropriate, by inheritance; in a graphical workflow, you don't necessarily have that option.
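The if-versus-inheritance point can be made concrete with a small Python sketch; the shape classes and formulas here are invented for the example:

```python
# The same branching logic expressed two ways.

# With an explicit if: every new kind of shape grows the branch.
def area_if(kind, size):
    if kind == "square":
        return size * size
    elif kind == "circle":
        return 3.14159 * size * size
    raise ValueError(kind)

# With inheritance, the branch disappears: each subclass carries
# its own case, and adding a shape touches no existing code.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius * self.radius
```

A textual language lets you pick whichever form fits; a graphical workflow usually only offers the first.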
Now, advertisers tell us that with their new thing, anyone will be able to create a fully-featured product in no time, with no programming experience needed. A great example: Microsoft Pivot. Indeed, you end up creating a working website in a few minutes with no code whatsoever. But wait: the guys from Microsoft never told you that you can create only one sort of website, and that when your customer asks for a feature that doesn't exist in Pivot, you're back to code.
Programmers believe that too
I heard the same idea, that time will make developers unnecessary, again and again during my career from my colleagues, especially the older ones. They base it on the observation that several elements make it easier to develop software over time. Those elements include:
A constantly increasing number of tools which are very helpful to developers: profilers, debugging tools, auto-completion, etc. These make it easier to build larger applications.
Constantly rising abstractions, which make it unnecessary to know things one had to know in the past in order to develop any program. A C# programmer writing business software doesn't actually need to know anything about pointers, or the stack and the heap, or low-level details in general. One doesn't need to know the SMTP protocol to send emails from any mainstream language, and one doesn't have to know all the tricky things about XML in order to generate or parse XML data.
A growing number of technologies, open source libraries and free or inexpensive APIs which make irrelevant tasks that would have had to be done by hand in the past. Recently, I built a system which enables virtual machines to send their logs in real time to a message queue service, which is then used by a web app that shows those messages, also in real time, to a user. It took me a few days, and could have taken much less if I weren't procrastinating so much. That's quick not because I'm good, but because I relied on Linux, Python, RabbitMQ, WebSockets, JSON (and its parsers), Node.js, Express and a few packages for Node.js. If I had to develop all this from scratch, I would probably estimate the task at a few decades.
Methodologies, patterns and practices based on sixty solid years of experience in software development and project management. These help produce solid software products more cheaply, reducing the initial number of developers as well as later maintenance (thus reducing the number of programmers needed even further).
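To illustrate how far those abstractions go: sending an email from Python's standard library takes a few lines, and the SMTP conversation itself stays entirely hidden. The server name and addresses below are placeholders:

```python
import smtplib
from email.message import EmailMessage

# Compose a message; addresses and content are made up for the example.
msg = EmailMessage()
msg["Subject"] = "Build finished"
msg["From"] = "ci@example.com"
msg["To"] = "dev@example.com"
msg.set_content("All tests passed.")

# One call performs the whole SMTP exchange; no knowledge of the
# protocol's commands (HELO, MAIL FROM, RCPT TO, DATA) is required.
# Commented out here because the placeholder server doesn't exist:
# with smtplib.SMTP("mail.example.com") as server:
#     server.send_message(msg)
```

The programmer deals with a message object; the protocol lives one abstraction level down.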
Does all this make development easier? It depends.
If you look at applications of the same scale, it does. A developer who knows which methodologies are the most appropriate for the context, is able to build a solid architecture and design prior to coding, and relies on libraries he knows and tools he has used for a long time will outperform a team of twenty inexperienced programmers who don't know their tools, don't do anything except write code, and don't care about project management, team management, communication and all this stuff.
If you look at actual business problems, development remains as difficult as it was before. Agreed, I can spawn in less than a week a real-time distributed system which works pretty well using Python, Django, RabbitMQ, MongoDB and Redis, something I could hardly have done twenty years ago. But twenty years ago, I wouldn't have been asked to implement a distributed system anyway: it would have been handled by a large team of developers much more experienced than I am. The constantly increasing complexity of software makes the discussion about the obsolescence of developers irrelevant: there will always be a place for skillful developers to create software products for people who can't afford to learn how to use a profiler, or the difference between MongoDB and Oracle, or how to handle technical debt in a team of programmers who don't care.