Charles Babbage is widely regarded as the father of the modern digital computer.
Ever since he designed the Difference Engine in 1822, machines have required some form of instructions to function.
Although those early instructions were far more complicated and difficult to work with, the kind of high-level language coding we are familiar with today had its foundations in those early days.
The biggest change between then and now came in the 1940s, when the US Army funded the building of the ENIAC, completed in 1945.
It moved machine instructions from physical motion to electrical signals.
John Von Neumann’s Work at IAS:
- The “shared-program technique”: hardware should not be hardwired for each specific task; instead, simple hardware should be controlled by complex programs written as instructions, so the same machine can do many jobs.
- The “conditional control transfer”: a computer should be able to branch on logical tests using conditional statements such as “IF” and “THEN”, and to repeat work using loops such as “FOR” and “WHILE”. These ideas also gave rise to subroutines and libraries: reusable blocks of code that can be called from anywhere in a program.
After Von Neumann’s revolutionary work, the language “Short Code” was introduced in 1949.
It was the first computer language for electronic devices.
Although it still required the programmer to translate its statements into 0s and 1s by hand, it was the first step toward today’s complex languages.
A compiler is a program that takes code written by a programmer and converts it into the 0s and 1s a machine understands. Grace Hopper wrote the first compiler, A-0, in 1952.
Programming became much faster with the compiler, because the programmer no longer had to translate statements by hand.
Introduced in 1957, FORTRAN was the first major computer language.
It was designed at IBM for scientific computing.
FORTRAN’s control flow offered only IF, DO, and GOTO statements, but at the time it was a revolutionary step forward.
Although FORTRAN was good with numbers, it handled input and output poorly, which matters a great deal in business computing.
To fulfill the needs of business computing, COBOL was developed in 1959.
COBOL allowed only two data types, numbers and strings, but it could group them into arrays and records, keeping data organized and easy to track.
In 1958 at MIT, John McCarthy created the LISt Processing language, or “LISP”, for AI research.
As it was designed specifically for AI, its original release had a unique syntax: programmers wrote code directly as parse trees, the tree structures a compiler normally generates internally, somewhere between an intermediate representation and a high-level language.
Its only data type was the list, although versions of LISP from the mid-1960s onward added other data types.
ALGOL, another language of that era, was the first to have a formal grammar (Backus-Naur Form), and it introduced novel concepts such as the recursive calling of functions.
Niklaus Wirth introduced Pascal in 1968 as a teaching tool.
But the language became popular and was put to many other uses, such as building editing systems and debuggers for microprocessor machines.
Pascal was designed to combine the best features of COBOL, FORTRAN, and ALGOL.
Strong input/output handling combined with solid mathematical features made it a very successful programming language.
Pascal improved the “pointer” data type and added the CASE statement, which lets control branch like a tree.
Not only that, Pascal advanced dynamic variables, i.e. variables created and destroyed at run time through the “NEW” and “DISPOSE” commands.
However, Pascal fell short by not implementing dynamic arrays, groups of variables whose size can change at run time, and that shortcoming contributed to its decline.
The C language was developed in 1972 by Dennis Ritchie at Bell Laboratories.
Most languages that came before C have faded from everyday use, which makes C the first major language still in widespread use today. Features that Pascal lacked became part of C, including an improved CASE-style construct, the switch statement.
Pointers are one of the strongest features of the C language and a key reason for its speed, because they map almost directly onto how the machine addresses memory. Pascal developers adapted to C easily, since C refined many of the same ideas.
Some features that keep the C language evergreen are dynamic variables, interrupt handling, forking, multitasking, and rich input/output facilities.
The C language is widely used for programming operating systems: Unix and Linux are written largely in C, as are core parts of Windows and macOS.
In the 1970s and 80s, a new programming method called Object-Oriented Programming (OOP) was introduced. Objects package pieces of data together with the code that manipulates them.
Bjarne Stroustrup created C++, which extended the C language with classes.
C++ is commonly used in simulation software, such as games.
In the early 1990s, Sun Microsystems built a portable language and named it Java. Then, in 1994, the focus of Java shifted to the web, which was “the cool thing” of that time.
Some top Java features are portability of code and garbage collection.
Future of Programming Languages
Programming languages have been under development for decades, and they will keep evolving for years to come. Every new language that becomes popular improves on its predecessors.
Languages that lose their purpose, or become impossible to improve, gradually drop off the list,
while newer ones absorb their best features and keep improving.
Modern programming languages are general-purpose, unlike many of the early ones.
With the emergence of quantum and, perhaps one day, biological computers, the programming languages of the future may be even more natural and powerful.
To learn more, get in touch with us today.
In the meantime, check our Remote Software Developer Services.
This blog was produced in collaboration with: