Up to about 60 years ago, “programming” meant connecting and disconnecting the cables of computers as big as entire rooms, in order to give the machine its commands in binary (on/off).
Fifty or more years ago, programming languages were tied directly to the processor that would run the software. There were no wires to unplug, but the code was very complex: operations that today would be considered elementary were laborious to write, with the aggravating factor that everything had to be rewritten from scratch for each new processor. Programming of this kind is what we mean when we talk about assembly languages and their relatives.
Later, programming evolved toward languages whose code could be transformed to work on a wider family of processors (like C), or potentially all of them (like Java). This magic is made possible by elements such as compilers and linkers (respectively, “translators” into machine language and integrators of external code, such as libraries written by other people), by virtual machines (the Java Virtual Machine or .NET, for example, which are actual layers between the physical computer and the software), or by interpreters, which read and execute the written code on the fly, at run time (as in the case of Python).
These languages, besides being multiplatform, have the enormous advantage of being simpler, because the translation into complex assembly code is done automatically, for example by the compiler. For this reason they are called third-level languages (assembly being second-level, and binary machine code first-level).
It should be pointed out that third-level languages are not all equal in syntax and basic principles. Initially they were all sequential, i.e. instructions were executed one after the other (apart from constructs such as conditions and loops). Later there was an almost mass shift to object-oriented programming, in which code is organized not as one long sequence but into units called classes, each bundling certain data with the functionality that operates on it; execution moves in and out of these classes.
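A minimal sketch in Python can make the contrast concrete. The bank-account scenario and all names here (`Account`, `deposit`, `withdraw`) are purely illustrative, not taken from any particular codebase:

```python
# Sequential (procedural) style: instructions simply run top to bottom.
balance = 100
balance = balance + 50   # a deposit
balance = balance - 30   # a withdrawal
print(balance)           # 120

# Object-oriented style: the same data and operations live together
# in a class, and execution moves in and out of its methods.
class Account:
    """Bundles the balance (data) with the operations on it (behavior)."""

    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount

account = Account(100)
account.deposit(50)
account.withdraw(30)
print(account.balance)   # 120
```

Both versions compute the same result; what changes is how the code is organized and reasoned about, which is exactly the shift that took developers time to digest.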
Developers who were forced to move from the sequential paradigm they had used for years to the object-oriented one took some time to fully digest it. It’s a completely different way of thinking (and working).
For a long time, programming with or without graphics was simply programming. There was no distinction. Websites and applications for the various operating systems were made by programmers. Sure, some were better than others or more specialized in certain areas, but that was the job.
About 15 years ago, with the explosion of new needs in the world of the web and, consequently, of new languages, the programmer’s job split in two: the backend, which is all the code you cannot see but that makes the rest work, and the frontend, which covers everything graphical in the software, web application, or mobile app.
The two worlds are obviously connected: the frontend without the backend wouldn’t have much to show. However, they are two quite different professions, the only thing they have in common being that both write code. Frontend developers often use several languages together and follow development methods different from those of backend developers.
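To picture the connection, here is a deliberately tiny sketch of a backend using only Python’s standard library: it answers HTTP requests with JSON data, which a frontend (a web page, a mobile app) would then fetch and turn into something visible. The route, the `USERS` data, and the `ApiHandler` name are all hypothetical, chosen just for this illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical data the backend serves; purely illustrative.
USERS = {"1": {"name": "Ada", "role": "admin"}}

class ApiHandler(BaseHTTPRequestHandler):
    """Minimal backend: answers GET /users/<id> with JSON, 404 otherwise."""

    def do_GET(self):
        user_id = self.path.rsplit("/", 1)[-1]
        user = USERS.get(user_id)
        if user is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(user).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence the default per-request console logging.
        pass

if __name__ == "__main__":
    # The frontend would request e.g. http://127.0.0.1:8000/users/1
    HTTPServer(("127.0.0.1", 8000), ApiHandler).serve_forever()
```

Everything the user actually sees — layout, colors, interaction — would live entirely on the frontend side, which only consumes the JSON this code produces.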
The jump from backend to frontend (and vice versa) is a long one. This is exactly why the full-stack developer, someone who does both backend and frontend well, is something of a mythological being. These figures exist, but, apart from some exceptional cases, they are people over 30, with at least 4-5 years of immersive experience in each of the two macro-families.
From all this it should be clear that the many courses one comes across online, which promise to turn people with little or no experience into full-stack developers in a few months, in all but exceptional cases produce people with many confused ideas in their heads. Such preparation can provide a smattering of knowledge, but creating autonomous full-stack developers takes years of experience.