“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”
In the late 1980s, Mark Weiser coined the term “ubiquitous computing” and published several papers on the idea, culminating in his landmark 1991 article, “The Computer for the 21st Century,” whose opening lines are quoted above. He used the term to characterize what he believed was the emerging third age of computing.
As we have progressed through the ages, or generations, of computing, not only have the kinds of technologies changed, but so has the very way humans interact with them.
The first generation began in the mid-1930s with Alan Turing and others who pursued the idea of codifying computation and building machines that could follow set instructions. The canonical devices of this period were mainframes, whose people-to-device ratio was many to one: many users interacted with one large machine in an impersonal way. Applications stayed mostly in the realm of scientific computation, wartime cryptography, and, later, data processing.
Fast-forwarding to the 1960s and 1970s, a second generation of computing emerged, pioneered by visionaries such as Alan Kay, Ben Shneiderman, Ivan Sutherland, Douglas Engelbart, and others, in what we now know as the age of personal computing. The capabilities of the mainframe shrank into a machine that could sit on someone’s desk, introducing the beginnings of the GUI, the mouse, and windowing systems. The people-to-device ratio was one to one, and these machines helped users perform tasks that benefited them personally. Applications included document processing, spreadsheets, and database management.
As mentioned above, Mark Weiser and his contemporaries noticed in the late 1980s and 1990s that, as the form factor of these devices shrank, more and more things could potentially be considered computing devices. This marked the third generation of computing, which Weiser called “ubiquitous.” An explosion of smaller form factors hit the market, including portable laptops, tape storage, compact discs, and, later, USB drives and cellular phones. The people-to-device ratio was one to many: one person would own and interact with many of these devices. These “computers” would be so commonplace that they would “disappear,” in the sense that they would be so embedded in our environment and daily lives that people would no longer notice them as they had in previous generations. Common applications included human-to-human communication and data transfer.
Researchers tend to agree that we are currently in the fourth age of computing, marked by technologies such as cloud services, crowd-sourcing (or social media), and a whole ecosystem of devices (the “Internet of Things”) that can connect and communicate with one another. This marks a people-to-device ratio of many to many, where multiple people can interact with each other, and with or through many devices, all at once. Some also include wearable technology here, suggesting a move from aggregating computing on a smartphone to computing on our very bodies with the introduction of health trackers, smart watches, and head-mounted displays.
So, what comes next? Is there something else that should mark the fourth age of computing? And is there a fifth age that has yet to emerge over the next decade or so?
Emerging research topics include computational skin, the intersection of computing with biology and neuroscience, and user-manufactured computation, which would allow people to create and reproduce their own computing devices many times over.
These are visions, and only time will tell where and how we choose to take the next step.