Classes and Objects

Encapsulation refers to the wrapping up of data and its associated functions into a single entity. One mechanism that allows programmers to encapsulate data is the class. Classes are also called abstract data types, as they enforce the object-oriented programming concepts of data abstraction and encapsulation.

Once a class is defined, one can create variables from it. A class variable is called an object or an instance. In other words, an object is an identifiable entity having certain characteristics and behaviour as defined by a class.

Take an example in C++:

class Name {

      //data members and methods that remain private, i.e. cannot be accessed outside the class (private is the default in a class)

   protected:
      //data members and methods that are protected, i.e. accessible from members of the same class and from members of its derived classes (see Inheritance)

   public:
      int mem1;
      void met1();
      //data members and methods that are public, i.e. accessible from anywhere

} Ob1, Ob2;

Here, ‘Name’ is the name of the class and ‘Ob1’ and ‘Ob2’ are its objects. The body of the declaration can contain members, either in the form of data or of function declarations. These data members and member functions/methods are classified into three categories: private, public and protected. These are reserved words called access specifiers. As described in the comments, the access specifiers essentially determine where a member of the class can be used in the program. By default, all members of a class declared with the class keyword have private access unless otherwise specified.

To access the class methods and data members, objects are used as follows:

Ob1.met1(); //accessing public member function met1() through the first object ‘Ob1’

cin >> Ob2.mem1; //accessing public data member mem1 through the second object ‘Ob2’

Now how does this work in memory? For each object of the class Name, a different set of values can be stored. This means mem1 for Ob1 (accessed by Ob1.mem1) can store, say, 4, while mem1 for Ob2 (accessed by Ob2.mem1) can store, say, 20. To relate, one can compare this with a school classroom list – here, the C++ class becomes ‘A’ (for classroom section) and it can hold the names of students, with their roll numbers, personal information, marks in different subjects, etc. Thus, classes can be used to streamline the storage of students’ details while selectively keeping personal information private.

See also: Inheritance, Polymorphism, Why Is Object-Oriented Programming Better?


Inheritance
Inheritance is the capability of one class of things to derive its properties from another class of things. It is an essential object-oriented programming concept.

Let’s think of inheritance from a real-life perspective – take, for example, your family. You derive some of your characteristics from your father and some from your mother. Some of these characteristics are subtle, while others are prominent, like the colour of your hair or eyes. Even further, your father and mother derive their characteristics from their parents.

Now, how exactly does this translate into code?

Well, inheritance lets us model all sorts of real-world relations in a program, such that some features stay hidden while others are clearly visible to the user. This is implemented through inheritance combined with the different access levels (private, public and protected).

Let’s consider another example to understand this better – a class called ‘Shape’ defines the basic form of shapes like circles, triangles, and squares. Classes ‘Circle’, ‘Triangle’ and ‘Square’ inherit from ‘Shape’ and build on the basic structure with their individual features, i.e. the specific areas, perimeters, etc. of the particular shapes. Similarly, ‘Equilateral Triangle’, ‘Isosceles Triangle’, etc. become classes derived from ‘Triangle’, with the definitions for the sides changing accordingly.

Types of inheritance:

  • Single Inheritance – one derived class inherits from one base class.
  • Multiple Inheritance – one derived class inherits from more than one base class.
  • Multilevel Inheritance – a derived class itself serves as the base class for another class.
  • Hierarchical Inheritance – several derived classes inherit from one base class.
  • Hybrid Inheritance – a combination of two or more of the above types.
Types of Inheritance

Inheritance has several advantages: reusability, real-world relevance, time-saving, transitive nature, and easier debugging.


Polymorphism
Merriam-Webster defines ‘polymorphism’ as the ‘quality or state of existing in or assuming different forms’. In simpler terms, polymorphism is the ability of a message or data to be interpreted in more than one form. Thus, one interface can be used in different situations.

There are two types of polymorphism:

When the arguments of methods are resolved at compile time, it is called static binding or early binding of arguments, also known as compile-time polymorphism. This type of polymorphism is achieved through overloaded functions and operators.

Overloading means having two or more meanings and enforces polymorphism in a program.

  • Overloaded Functions

A function with the same name having several different definitions, distinguishable by the number and types of its arguments, is said to be overloaded. This process is known as function overloading.

It keeps related operations under one name, which shortens the program and makes it easier to read and maintain.

  • Overloaded Operators

An operator capable of carrying out more than one action is called an overloaded operator. For example, the ‘+’ operator carries out addition with numbers, while the same ‘+’ sign can be used for the concatenation of strings. For example:

5 + 7 = 12

"A" + "BC" = "ABC"

When the arguments of methods are resolved at run time, it is called run-time polymorphism. This type of polymorphism is achieved using virtual functions, through function overriding.

A virtual function is a method/member function of an abstract class whose functionality (or implementation) is provided by derived classes, called concrete classes. A function is made virtual by using the keyword ‘virtual’ in its declaration.

For example, the function draw() is a pure virtual function in the abstract class shape in C++:

class shape {
   public:
      virtual void draw() = 0;  //pure virtual function (defined in the concrete classes)
};
Now, there can be multiple definitions of draw() in the derived, concrete classes, thereby implementing polymorphism.

Why Is Object-Oriented Programming Better?

Before we can discuss why object-oriented languages are better suited to real-world applications than languages that only support procedural programming, we need to understand what this really means.

A programming paradigm defines the methodology for designing and implementing a program using the key features of a programming language. So essentially, a programming paradigm describes the logical flow and implementation of a program. Let’s take a simple example: say you had to take the details of five students along with their marks in Computer Science. There are several approaches to the problem: a two-dimensional array with 5 rows and 2 columns could store the names of the students with their marks; a structure could be used to do the same thing; or one might even consider using a class and creating objects for the values, wherein the member functions can be used to take and display the required data.

While there are several types of programming paradigms, let’s limit ourselves to Procedural Programming and Object-Oriented Programming (OOP). Refer back to the previous example – a structure would group together the names of the students and the marks, and separate functions would carry out the task; this is procedural programming. Now if we were to club the names, the marks and the functions required to take and display the details into a single class, we’d be using object-oriented programming. In OOP, data and its associated functions are clubbed together in classes, whose instances, called objects, are used in the program.

Thus, the main difference between the two is that OOP encloses data and its associated functions into one unit using classes, while procedural programming separates the two. This makes a program that follows procedural programming rather than OOP highly susceptible to design changes: a change in the definition of a type changes the design of the whole program.

There are several benefits to using the OOP paradigm, but the most important is its real-world relevance: it can depict relationships between objects through inheritance. It allows the implementation of polymorphism, data hiding and other such object-oriented concepts. OOP languages allow for smart programming, reuse of code, easy understanding and even easier redesign and maintenance.

Let’s summarise the features of these two paradigms:

Procedural Programming:
  • Data and the functions that operate on it are kept separate.
  • Emphasis is on procedures (functions), with data passed between them.
  • A change in the definition of a type can force design changes across the whole program.

Object-Oriented Programming (OOP):
  • Data and its associated functions are clubbed together in classes.
  • Emphasis is on objects, which model real-world entities through inheritance and polymorphism.
  • Changes stay localized to a class, making redesign and maintenance easier.

How we got to the First Computer

Computers truly came into their own as great inventions in the last two decades of the 20th century. But their history stretches back more than 3000 years to the abacus: a simple calculator made from beads, movable on rods and divided into two parts. It was developed by the Mesopotamians and later improved by the Chinese in order to add and multiply using the place value of the digits of numbers and the positions of the beads on the abacus. The difference between an ancient abacus and a modern computer seems vast, but the principle—making repeated calculations more quickly than the human brain—is exactly the same. Abacuses are still used in some parts of the world today.


In the early 1600s, Napier’s logs and bones were developed. Also known as Napier’s rods, these are numbered rods which can be used to multiply any number by a number from 2 to 9. In 1642, the French scientist and philosopher Blaise Pascal invented the first practical mechanical calculator. It was a machine made up of gears which were used for adding numbers quickly. It consisted of numbered toothed wheels having unique position values, which controlled the addition and subtraction operations.

Blaise Pascal’s Adding Machine

Several decades later, in 1671, German mathematician and philosopher Gottfried Wilhelm Leibniz improved on the adding machine and constructed a new machine that was able to perform multiplication and division as well. Instead of using gears, it had a “stepped drum” (a cylinder with teeth of increasing length around its edge), an innovation that survived in mechanical calculators for hundreds of years.

Leibniz Calculator

Leibniz is remembered for another important contribution to computing: he was the man who invented binary code, a way of representing any decimal number using only the two digits zero and one. Although Leibniz made no use of binary in his own calculator, it set others thinking. In 1854, a little over a century after Leibniz had died, Englishman George Boole (1815–1864) used the idea to invent a new branch of mathematics called Boolean algebra. In modern computers, binary code and Boolean algebra allow computers to make simple decisions by comparing long strings of zeros and ones.

Anyway, neither the abacus nor the mechanical calculators constructed by Pascal and Leibniz really qualified as computers. Calculators evolved into computers when people devised ways of making entirely automatic, programmable calculators.

In the early 1800s, Joseph Marie Jacquard invented Jacquard’s loom. He manufactured punched cards and used them to control looms in order to weave. This entire operation was under a program’s control. With this invention of punched cards, the era of storing and retrieving information started that greatly influenced the later inventions and advancements.

Jacquard’s Loom

The first person to attempt to build a computer was an English mathematician named Charles Babbage. Many regard Babbage as the “father of the computer” because his machines had an input (a way of feeding in numbers), a memory (something to store these numbers while complex calculations were taking place), a processor (the number-cruncher that carried out the calculations), and an output (a printing mechanism)—the same basic components shared by all modern computers.

Initially, he designed a Difference Engine to calculate logarithmic tables to a high degree of precision. The machine, however, was never built. In fact, during his lifetime, Babbage never completed a single one of the hugely ambitious machines that he designed, including the Analytical Engine, which remained in the conceptual phase. Little of Babbage’s work survived after his death, but when, by chance, his notebooks were rediscovered in the 1930s, computer scientists finally appreciated the brilliance of his ideas.

Augusta Ada Byron, Countess of Lovelace, daughter of the poet Lord Byron, an enthusiastic mathematician and a long-time friend of Babbage, collaborated with him on the Analytical Engine. Her notes on the Analytical Engine include a plan for how the machine could calculate Bernoulli numbers, regarded as the first computer program. Lovelace refined Babbage’s ideas for making his machine programmable, and she is referred to as the world’s first computer programmer.

Toward the end of the 19th century, other inventors were more successful in their efforts to construct “engines” of calculation. American statistician Herman Hollerith built the first electromechanical punched-card tabulator, which used punched cards for input, output, and instructions. This machine was used by the US Census Bureau to compile its census data.

Soon afterward, Hollerith realized his machine had other applications, so he set up the Tabulating Machine Company in 1896 to manufacture it commercially. A few years later, it changed its name to the Computing-Tabulating-Recording (C-T-R) company and then, in 1924, acquired a new name: International Business Machines (IBM).

At the time when C-T-R was becoming IBM, US government scientist Vannevar Bush made a series of unwieldy contraptions with equally cumbersome names: The New Recording Product Integraph Multiplier. Later, he built a machine called the Differential Analyzer, which was used to carry out calculations. Bush’s ultimate calculator was an improved machine named the Rockefeller Differential Analyzer. Machines like these were known as analog calculators—analog because they stored numbers in a physical form (as so many turns on a wheel or twists of a belt) rather than as digits. Although they could carry out incredibly complex calculations, it took several days before the results finally emerged.

One of the key figures in the history of 20th-century computing, Alan Turing was a brilliant Cambridge mathematician whose major contributions were to the theory of how computers processed information. In 1936, at the age of just 23, Turing wrote a groundbreaking mathematical paper called “On computable numbers, with an application to the Entscheidungsproblem,” in which he described a theoretical computer now known as a Turing machine (a simple information processor that works through a series of instructions, reading data, writing results, and then moving on to the next instruction). Turing’s ideas were hugely influential in the years that followed and many people regard him as the father of modern computing—the 20th-century’s equivalent of Babbage.

Alan Turing, at 16

Although essentially a theoretician, Turing did get involved with real, practical machinery, unlike many mathematicians of his time. During World War II, he played a pivotal role in the development of code-breaking machinery that, itself, played a key part in Britain’s wartime victory; later, he played a lesser role in the creation of several large-scale experimental computers including ACE (Automatic Computing Engine), Colossus, and the Manchester/Ferranti Mark I. Today, Alan Turing is best known for conceiving what’s become known as the Turing test, a simple way to find out whether a computer can be considered intelligent by seeing whether it can sustain a plausible conversation with a real human being.

Just before the outbreak of the Second World War, in 1938, German engineer Konrad Zuse constructed his Z1, the world’s first programmable binary computer. The following year, American physicist John Atanasoff and his assistant, electrical engineer Clifford Berry, assembled a prototype of a more elaborate binary machine that they named the Atanasoff-Berry Computer (ABC). These were the first machines that used electrical switches to store numbers: when a switch was “off”, it stored the number zero; flipped over to its other, “on”, position, it stored the number one. Hundreds or thousands of switches could thus store a great many binary digits. These machines were digital computers: unlike analog machines, which stored numbers using the positions of wheels and rods, they stored numbers as digits.

Atanasoff-Berry Computer (ABC)

Professor Howard Aiken of Harvard University constructed the Mark I, an automatic, general-purpose electromechanical computer, which could multiply two 10-digit numbers in 5 seconds – a record at the time. It was a large machine, stretching 15 m in length, like a huge mechanical calculator built into a wall. It stored and processed numbers using electromagnetic relays, which, although impressive, needed a lot of power to make them switch.

Mark I

During World War II, the military co-opted thousands of the best scientific minds, recognizing that science would win the war. Things were very different in Germany: when Konrad Zuse offered to build his Z2 computer to help the army, they couldn’t see the need—and turned him down. On the Allied side, great minds began to make great breakthroughs. In 1943, a team of mathematicians, including Alan Turing, built a computer called Colossus to help them crack secret German codes. Colossus was the first fully electronic computer. Instead of relays, it used a better form of switch known as a vacuum tube (invented in 1904 by John Ambrose Fleming).


Just like the codes it was trying to crack, Colossus was top-secret, and its existence wasn’t confirmed until after the war ended. As far as most people were concerned, vacuum tubes were pioneered by a more visible computer that appeared in 1946: the Electronic Numerical Integrator and Calculator (ENIAC). The ENIAC’s inventors, two scientists from the University of Pennsylvania, John Mauchly and J. Presper Eckert, were originally inspired by Bush’s Differential Analyzer. But the machine they constructed was far more ambitious. It contained nearly 18,000 vacuum tubes (nine times more than Colossus), was around 24 m (80 ft.) long, and weighed almost 30 tons.


Following the invention of the ENIAC, other first-generation computers such as EDVAC, EDSAC, and UNIVAC-I came up.

Generations of Computers

The term ‘computer generation’ refers to each phase of development in the field of computers relative to the hardware used. There are five generations:


  • First Generation (from around 1940 to around 1956): Vacuum Tubes
  • Second Generation (from around 1956 to around 1963): Transistors
  • Third Generation (from around 1963 to around 1971): Integrated Circuits (ICs)
  • Fourth Generation (from around 1971 to around 1990): Microprocessors
  • Fifth Generation (from around 1990 onwards): Artificial Intelligence


First Generation Computers – Vacuum Tubes

Vacuum Tubes

The first-generation computers utilized vacuum tubes for circuitry. Vacuum tubes were invented in 1904 by John Ambrose Fleming. The following is his original patent for the first practical electron tube, called the ‘Fleming valve’:

The ‘Fleming Valve’

The features of the computers from the first generation were:

  • Magnetic drums were used as memory.
  • Punched cards and paper tape were used for storage.
  • Electrical failure was a common occurrence, making these computers unreliable.
  • They consumed a large amount of electricity and produced a lot of heat, so air conditioning was required.
  • They were extremely large, occupying the space of a big room, and hence non-portable.
  • Programming was done in assembly and machine languages.
  • They processed data at a slow operating speed.

The major computers from the first generation are as follows: ENIAC (Electronic Numerical Integrator And Calculator), EDVAC (Electronic Discrete Variable Automatic Computer), EDSAC (Electronic Delay Storage Automatic Calculator) and UNIVAC-I (a computer built by the Univac Division of Remington Rand, which used control panels with switches).

Second Generation Computers – Transistors

A transistor is a small device used to transfer electronic signals across a resistor. The scientists William Shockley, Walter Houser Brattain and John Bardeen developed the transistor in 1947. Like vacuum tubes, transistors could be used as amplifiers or as switches. But they had several major advantages: they were a fraction of the size of vacuum tubes (typically about as big as a pea), used almost no power unless they were in operation, and were virtually 100 percent reliable. The transistor was one of the most important breakthroughs in the history of computing, and it earned its inventors the world’s greatest science prize, the 1956 Nobel Prize in Physics.

The features of the computers from the second generation were:

  • Magnetic cores were used as memory.
  • Hard disks and magnetic tape were used for storage.
  • They produced much less heat than the earlier computers, although cooling was still required.
  • They were smaller in size than the first-generation computers.
  • Programming was done in assembly and machine languages.
  • Electricity consumption was lower than that of the first generation.
  • They were most suitable for scientific and bulk data-processing tasks.
  • They required frequent maintenance.
  • The input and output devices used were teletypewriters and punched cards.

The major computers from the second generation are as follows: IBM 1400 series, IBM 1700 series, and Control Data 3600.

Third Generation Computers – Integrated Circuits (ICs)

Although transistors were a great advance on vacuum tubes, one key problem remained. Machines that used thousands of transistors still had to be hand-wired to connect all these components together. That process was laborious, costly, and error-prone. Thus, in 1958, Jack Kilby invented the “monolithic” integrated circuit (IC), a collection of transistors and other components that could be manufactured all at once, in a block, on the surface of a semiconductor. Kilby’s invention was another step forward, but it also had a drawback: the components in his integrated circuit still had to be connected by hand. While Kilby was making his breakthrough in Dallas, unknown to him, Robert Noyce was perfecting almost exactly the same idea in California. Noyce found a way to include the connections between components in an integrated circuit, thus automating the entire process. Commercial ICs were launched around 1961 and led to a massive increase in the speed and efficiency of these machines.

The features of the computers from the third generation were:

  • Core memory and DRAM chips were used as memory.
  • Hard disks and floppy disks were used for storage.
  • Less human labor was required for assembly.
  • The computers became smaller, more reliable and faster.
  • Programming was done in high-level languages like FORTRAN, BASIC, etc.
  • Power consumption was lower than in the previous generations.
  • The input and output devices used were keyboards and printers.
  • The size of the main memories increased substantially.

The major computers from the third generation are as follows: IBM 360 series, IBM 370/168 series, ICL 1900 series, ICL 2900, Honeywell Model 316, Honeywell 6000 series, ICL 2903, and CDC 1700.

Fourth Generation Computers – Microprocessors

As the 1960s wore on, integrated circuits became increasingly sophisticated and compact. Soon, engineers were speaking of large-scale integration (LSI), in which hundreds of components could be crammed onto a single chip, and then very large-scale integration (VLSI), when the same chip could contain thousands of components. The logical conclusion of all this miniaturization was that, someday, someone would be able to squeeze an entire computer onto a chip. The fourth generation of computers is marked by the creation of microprocessors and Very Large Scale Integration (VLSI) circuits. These were developed by Intel (specifically by Marcian “Ted” Hoff).


Fourth-generation computers became more compact, reliable and affordable. By 1974, Intel had launched a popular microprocessor known as the 8080, and computer hobbyists were soon building home computers around it. The first was the MITS Altair 8800, built by Ed Roberts. With its front panel covered in red LED lights and toggle switches, it was a far cry from modern PCs and laptops. Even so, it sold by the thousands and gave rise to the personal computer (PC) revolution.

The Altair inspired a Californian electronics wizard named Steve Wozniak to develop a computer of his own. “Woz” is often described as the hacker’s “hacker”—a technically brilliant and highly creative engineer who pushed the boundaries of computing largely for his own amusement. In the mid-1970s, he was working at the Hewlett-Packard computer company in California. After seeing the Altair, Woz used a 6502 microprocessor (made by an Intel rival, MOS Technology) to build a better home computer of his own: the Apple I. When he showed off his machine, one of his friends, Steve Jobs, persuaded him to build a business around it. Woz agreed, so, famously, they set up Apple Computer Corporation in a garage belonging to Jobs’ parents. After selling 175 of the Apple I for the devilish price of $666.66, Woz built a much better machine called the Apple ][ (pronounced “Apple Two”). Launched in April 1977, it was the world’s first easy-to-use home “microcomputer.” Soon home users, schools, and small businesses were buying the machine in their tens of thousands—at $1298 a time.

Steve Wozniak

Two things turned the Apple ][ into a really credible machine for small firms: a disk drive unit, launched in 1978, which made it easy to store data; and a spreadsheet program called VisiCalc, which gave Apple users the ability to analyze that data. In just two and a half years, Apple sold around 50,000 of these machines, quickly accelerating out of Jobs’ garage to become one of the world’s biggest companies. Dozens of other microcomputers were launched around this time, including the TRS-80 from Radio Shack (Tandy in the UK) and the Commodore PET.

Apple’s success selling to businesses came as a great shock to IBM and the other big companies that dominated the computer industry. In 1980, IBM finally realized it had to do something and launched a highly streamlined project to save its business. One year later, it released the IBM Personal Computer (PC), based on an Intel 8088 microprocessor, which rapidly reversed the company’s fortunes and stole the market back from Apple.

The PC was successful essentially for one reason. All the dozens of microcomputers that had been launched in the 1970s—including the Apple ][—were incompatible. All used different hardware and worked in different ways. Most were programmed using a simple, English-like language called BASIC, but each one used its own flavor of BASIC, which was tied closely to the machine’s hardware design. As a result, programs written for one machine would generally not run on another one without a great deal of conversion. Companies who wrote software professionally typically wrote it just for one machine and, consequently, there was no software industry to speak of.

In 1976, Gary Kildall, a teacher and computer scientist, figured out a solution to this problem. Kildall wrote an operating system (a computer’s fundamental control software) called CP/M that acted as an intermediary between the user’s programs and the machine’s hardware. With CP/M in between, all those machines could run identical user programs—without any modification at all. That made all the different microcomputers compatible at a stroke. By the early 1980s, Kildall had become a multimillionaire through the success of his invention: the first personal computer operating system.

Naturally, when IBM was developing its personal computer, it approached him hoping to put CP/M on its own machine. Legend has it that Kildall was out flying his personal plane when IBM called, so missed out on one of the world’s greatest deals. But the truth seems to have been that IBM wanted to buy CP/M outright for just $200,000, while Kildall recognized his product was worth millions more and refused to sell. Instead, IBM turned to a young programmer named Bill Gates. His then-tiny company, Microsoft, rapidly put together an operating system called DOS, based on a product called QDOS (Quick and Dirty Operating System), which they acquired from Seattle Computer Products. The IBM PC, powered by Microsoft’s operating system, was a runaway success.

Yet IBM’s victory was short-lived. Cannily, Bill Gates had sold IBM the rights to one flavor of DOS (PC-DOS) and retained the rights to a very similar version (MS-DOS) for his own use. When other computer manufacturers, notably Compaq and Dell, started making IBM-compatible (or “cloned”) hardware, they too came to Gates for the software. IBM charged a premium for machines that carried its badge, but consumers soon realized that PCs were commodities: they contained almost identical components—an Intel microprocessor, for example—no matter whose name they had on the case. As IBM lost market share, the ultimate victors were Microsoft and Intel, who were soon supplying the software and hardware for almost every PC on the planet. Apple, IBM, and Kildall made a great deal of money—but all failed to capitalize decisively on their early success.

Fortunately for Apple, it had another great idea. One of the Apple II’s strongest suits was its sheer “user-friendliness.” For Steve Jobs, developing truly easy-to-use computers became a personal mission in the early 1980s. Jobs launched a project to build an easy-to-use computer, code-named PITS (Person In The Street). This machine became the Apple Lisa, launched in January 1983—the first widely available computer with a GUI desktop. It paved the way for a better, cheaper machine called the Macintosh that Jobs unveiled a year later, in January 1984. With its memorable launch ad for the Macintosh inspired by George Orwell’s novel 1984, and directed by Ridley Scott (director of the dystopic movie Blade Runner), Apple took a swipe at IBM’s monopoly, criticizing what it portrayed as the firm’s domineering—even totalitarian—approach: Big Blue was really Big Brother. Apple’s ad promised a very different vision: “On January 24, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984’.” The Macintosh was a critical success and helped to invent the new field of desktop publishing in the mid-1980s, yet it never came close to challenging IBM’s position.

Ironically, Jobs’ easy-to-use machine also helped Microsoft to dislodge IBM as the world’s leading force in computing. When Bill Gates saw how the Macintosh worked, with its easy-to-use picture-icon desktop, he launched Windows, an upgraded version of his MS-DOS software. Apple saw this as blatant plagiarism and filed a $5.5 billion copyright lawsuit in 1988. Four years later, the case collapsed with Microsoft effectively securing the right to use the Macintosh “look and feel” in all present and future versions of Windows. Microsoft’s Windows 95 system, launched three years later, had an easy-to-use, Macintosh-like desktop and MS-DOS running behind the scenes.

The salient features of the fourth-generation computers are:

  • Microcomputer series were developed (such as the IBM PC and the Apple II).
  • Portable computers were developed.
  • Memory chips were used as main memory.
  • Hard disks, CDs, DVDs, flash memories, Blu-ray discs, floppy disks, and cloud storage were used for storage.
  • The computers became smaller, more reliable and faster. Their storage capabilities increased and there was a great development in data communication.
  • Computer costs came down rapidly. Thus, Personal Computers (PCs) became common.
  • The programming for these computers was done in High-Level Languages like C, C++, dBASE, etc.
  • The input and output devices increased in number.

Input devices – Keyboards, mouse, joysticks, voice input, etc.

Output devices – Printers, plotters, speakers, etc.

  • No air conditioning was required to control the heat released, because the computers came with built-in fans for heat dissipation.

Some of the major computers from the fourth generation are as follows: Intel processors (80286, 80386, 80486, the Pentium P5, dual-core, quad-core, etc.), PowerPC, AMD, Apple Macintosh, IBM, Dell, and several RISC (Reduced Instruction Set Computer) machines.

Fifth Generation Computers – Artificial Intelligence

Today, computers aren’t what they used to be: they’re much less noticeable because they’re much more seamlessly integrated into everyday life. Some are “embedded” into household gadgets like coffee makers or televisions. Others travel around in our pockets in our smartphones—essentially pocket computers that we can program simply by downloading “apps” (applications).

The fifth generation of computers marks the shift to a more technologically advanced and efficient era of computing. Artificial Intelligence is the focus of this generation, driving modern applications such as voice recognition and advanced robotics. The goal of future development is computers that can learn and self-organize while responding to natural language input. Computers will be able to classify information, search large databases quickly, and plan on the basis of their own thinking and decision-making.

The features of this fifth generation for computers are:

  • Parallel Processing – many processors are grouped to function as one large group processor.
  • Superconductors – a superconductor is a conductor through which electricity can travel without any resistance, resulting in faster transfer of information between the different parts of a computer.
  • Artificial Intelligence would result in making everyday activities easier.
  • Intelligent systems can control the route of a missile and can defend us from attacks.
  • Word processors can recognize speech and can type out the same.
  • Programs are now able to translate documents from one language to another with ease.
  • AI has led to thinking machines that are capable of accomplishing tasks beyond human capabilities. Robots have been developed to do jobs that humans currently do.
  • While AI is the focus of the fifth generation and beyond, we still look for improvements in our personal computers, phones, and other recent technology. For these, the input/output devices and the memory remain largely the same.


The Internet and World Wide Web

During the 1970s, standardized PCs running standardized software brought a big benefit for businesses: computers could be linked together into networks to share information. At Xerox PARC in 1973, electrical engineer Bob Metcalfe developed a new way of linking computers “through the ether” (empty space) that he called Ethernet. A few years later, Metcalfe left Xerox to form his own company, 3Com, to help companies realize “Metcalfe’s Law”: computers become more useful the more closely connected they are to other people’s computers. As more and more companies explored the power of local area networks (LANs), it became clear as the 1980s progressed that there were great benefits to be gained by connecting computers over even greater distances—into so-called wide area networks (WANs).

Today, the best known WAN is the Internet—a global network of individual computers and LANs that links up hundreds of millions of people. The history of the Internet is another story, but it began in the 1960s when four American universities launched a project to connect their computer systems together to make the first WAN. Later, with funding from the Department of Defense, that network became a bigger project called ARPANET (Advanced Research Projects Agency Network). In the mid-1980s, the US National Science Foundation (NSF) launched its own WAN called NSFNET. In the 1980s, the convergence of all these networks produced what we now call the Internet. Shortly afterward, the power of networking gave British computer programmer Tim Berners-Lee his big idea: to combine the power of computer networks with the information-sharing idea Vannevar Bush had proposed in 1945. Thus was born the World Wide Web—an easy way of sharing information over a computer network.

By the time the original idea for the web was developed, millions of computers were already being connected through the fast-developing Internet, and Berners-Lee realized they could share information by exploiting an emerging technology called hypertext.

In March 1989, Tim laid out his vision for what would become the web in a document called “Information Management: A Proposal”.

By October of 1990, he had written the three fundamental technologies that remain the foundation of today’s web:

  • HTML: HyperText Markup Language. The markup (formatting) language for the web.
  • URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify each resource on the web. It is also commonly called a URL.
  • HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across the web.

Eventually, in April 1993, the underlying code was made available on a royalty-free basis, forever. This decision sparked a global wave of creativity, collaboration, and innovation never seen before. Tim Berners-Lee moved from CERN to the Massachusetts Institute of Technology in 1994 to found the World Wide Web Consortium (W3C), an international community devoted to developing open web standards. He remains the Director of W3C to this day.


Evolution of Programming Languages

Software is developed using various programming languages. Programming started with machine languages and evolved into modern programming systems.

First Generation of Programming Languages (1GL) Early programming was done in machine language.
Second Generation of Programming Languages (2GL) After machine language, assembly language programming came about. Together, 1GL and 2GL are called low-level languages; they are easier for computers to understand but difficult for programmers.
Third Generation of Programming Languages (3GL) These languages were largely based on the English language and hence were easier for programmers to comprehend. The 3GLs are also called High-Level Languages, for example, ALGOL, COBOL, Fortran, BASIC, C, PASCAL, etc.
Fourth Generation of Programming Languages (4GL) These programming languages are similar to 3GLs but are even easier to understand because of their proximity to natural language. The most popular 4GL is SQL (Structured Query Language).
Fifth Generation of Programming Languages (5GL) The fifth-generation programming languages are used mainly in Artificial Intelligence research, for example, Prolog, OP, Mercury, etc.


Let’s take a closer look at some of these languages:


Supercomputers

Supercomputers, the world’s largest and fastest computers, are primarily used for complex scientific calculations. The parts of a supercomputer are comparable to those of a desktop computer: they both contain hard drives, memory, and processors (circuits that process instructions within a computer program).

Although both desktop computers and supercomputers are equipped with similar processors, their speed and memory sizes are significantly different. The supercomputer’s large number of processors, enormous disk storage, and substantial memory greatly increase the power and speed of the machine. Although desktop computers can perform millions of floating-point operations per second (megaflops), supercomputers can perform at speeds of billions of operations per second (gigaflops) and trillions of operations per second (teraflops).

Evolution of Supercomputers

Many current desktop computers are actually faster than the first supercomputer, the Cray-1, which was developed by Cray Research in the mid-1970s. The Cray-1 was capable of computing at 167 megaflops by using a form of supercomputing called vector processing, which consists of rapid execution of instructions in a pipelined fashion. Contemporary vector processing supercomputers are much faster than the Cray-1, but an even faster method of supercomputing was introduced in the mid-1980s: parallel processing. Applications that use parallel processing are able to solve computational problems by simultaneously using multiple processors. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

Applications of Supercomputers

Supercomputers are so powerful that they can provide researchers with insight into phenomena that are too small, too big, too fast, or too slow to observe in laboratories. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modelling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion)

Top supercomputers of recent years

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China. A few statistics on TaihuLight:

  • 40,960 64-bit RISC processors with 260 cores each.
  • Peak performance of 125 petaflops (quadrillion floating point operations per second).
  • 32GB DDR3 memory per compute node, 1.3 PB memory in total.
  • Linux-based Sunway Raise operating system (OS).


Year   Supercomputer        Peak speed     Location
2016   Sunway TaihuLight    93.01 PFLOPS   Wuxi, China
2013   NUDT Tianhe-2        33.86 PFLOPS   Guangzhou, China
2012   Cray Titan           17.59 PFLOPS   Oak Ridge, U.S.
2012   IBM Sequoia          17.17 PFLOPS   Livermore, U.S.
2011   Fujitsu K computer   10.51 PFLOPS   Kobe, Japan
2010   Tianhe-IA            2.566 PFLOPS   Tianjin, China
2009   Cray Jaguar          1.759 PFLOPS   Oak Ridge, U.S.

Evolution of Storage Devices

Whether it’s a personal music collection, a photo album, a computer program, or a company’s business-critical systems, data storage is a must-have for nearly everyone today. As technology has evolved, computers have allowed for increasingly capacious and efficient data storage, which in turn has allowed increasingly sophisticated ways to use it.

These include a variety of business applications, each with unique storage demands. The storage used for long-term data archiving, in which the data will be very infrequently accessed, might differ from the storage used for backup and restore or disaster recovery, in which data needs to be accessed or changed frequently.

None of these new data storage technologies would be possible, however, without a century of steady scientific and engineering progress. From the invention of the magnetic tape in 1928 all the way to the use of cloud today, advanced data storage has come a long way.

Machine-Readable Punched Card

The standard punched card, originally invented by Herman Hollerith, was first used for vital statistics tabulation by the New York City Board of Health and several states. After this trial use, punched cards were adopted for use in the 1890 census.

Magnetic Drum

Gustav Tauschek, an Austrian innovator, invented the magnetic drum in 1932. He based his invention on a discovery credited to Fritz Pfleumer. Electromagnetic pulses were stored by changing the magnetic orientation of ferromagnetic particles on the drum.

Williams Tube

Professor Frederic C. Williams and his colleagues developed the first random-access computer memory at the University of Manchester in the United Kingdom. They used a series of electrostatic cathode-ray tubes for digital storage. Storage of 1,024 bits of information was successfully implemented in 1948.

Magnetic Tape

As early as 1951, magnetic tape was being used in the UNISERVO system to store computer data. The UNISERVO tape drive was the primary I/O device on the UNIVAC I computer. Its place in history is assured as it was the first tape drive for a commercially sold computer. Although tape has largely been replaced by newer methods of data storage, it is still used, especially for storing large amounts of data, because of its low cost. Modern magnetic tape is usually found in cassettes and cartridges, but initially, tape was held on 10.5-inch open reels. This “de facto” standard for computer systems lasted all the way through to the 1980s, when smaller, less fragile data storage systems were introduced.

Hard Disk

A hard disk uses rotating platters to store and retrieve bits of digital information from a flat magnetic surface.

Hard disks were introduced by IBM in the late 1950s and 1960s. The earliest hard drives were immensely bulky and costly. However, the hard disk drive (HDD) is still the most common form of internal secondary data storage (whereas CPUs and RAM are considered primary storage) in computers.

What made and keeps the HDD so popular is its high capacity, which far exceeds that of an average USB flash drive or DVD, and performance. Data on an HDD can be read and written relatively quickly. Magnetic heads read data off rapidly rotating rigid disks, also referred to as platters.


DRAM

In 1966, Robert H. Dennard invented the DRAM cell: Dynamic Random Access Memory technology, in which each memory cell contains a single transistor.

DRAM cells store bits of information as an electrical charge in a circuit. DRAM cells increased overall memory density.

Floppy Disk

This relic of data storage emerged in the 1970s. By the early 2000s, it was almost completely out of use, replaced by sturdier, higher capacity devices like USB flash drives. A floppy disk was composed of a thin, flexible magnetic disk inside a flat plastic cartridge and lined with a fabric designed to remove dust particles from the magnetic disk. Floppy disks were produced in three main sizes. The 8-inch disk stored 1 MB of data, the 5.25-inch disk stored 1.2 MB and the 3.5-inch disk stored 1.44 MB.

Optical Storage Discs

Optical discs, including CDs, DVDs, and Blu-Ray discs, are flat, usually circular discs, generally consisting of a layer of reflective material (often aluminum) in a plastic coating. Data is stored on the discs in binary form, with binary values of 0 represented by “pits” and binary values of 1 represented by areas where the aluminum reflects light.


Blu-Ray (introduced in the 2000s) is the next generation of optical disc format, used to store high-definition (HD) video and for high-density storage. Blu-Ray received its name from the blue-violet laser (with a wavelength of about 405 nanometres) that allows it to store more data than a standard DVD.

Although still common, optical discs are currently being replaced by online data storage and distribution.

USB flash drive

A USB flash drive uses flash memory, which is non-volatile (meaning that it retains stored data even after being powered off) and can be repeatedly erased and refilled with data – at least until the drive gets a corrupt sector. Flash drives are usually very small for the amount of data they carry. They have no moving parts and so aren’t highly susceptible to wear and tear, they’re cheap, and they aren’t as prone to damage as optical discs. They also don’t rely on dedicated drives, instead using the standard USB ports included on all modern computers. Emerging into the market in late 2000, the earliest flash drives could store 8 MB of data. Today, flash drives that can store a terabyte of data are available.

Solid-state drive

The solid-state drive, which emerged commercially in the late 2000s, stores less data than the HDD but offers vastly superior read and write speeds. Whereas the average HDD reads data at about 75 MB per second, entry-level solid-state drives can read data at up to 600 MB per second. Because they contain no moving parts, solid-state disks are also far less prone to damage.

Cloud Data Storage

Improvements in internet bandwidth and the falling cost of storage capacity mean it’s frequently more economical for businesses and individuals to outsource their data storage to the cloud, rather than buying, maintaining, and replacing their own hardware. The cloud offers near-infinite scalability and anywhere, anytime data access.