When Technology Imitates Life

“All truly wise thoughts have been thought already thousands of times; but to make them truly ours, we must think them over again honestly till they take firm root in our personal experience.” – Johann Wolfgang von Goethe

The history of computers is usually told as a history of products. I prefer to think of it as the story of the people behind the inventions: the philosophers, the inventors, and the advisors who steered great minds to incredible breakthroughs. I often wonder what prompted an inventor to work on one idea while passing on others. Were they influenced by mentors? Was it a personal choice? Was it the availability of funding to implement an idea? As we look at those decades-old decisions, do we still believe they were the right ideas to pursue at the time? And what implications did they have in the long run? If we could go back to the inflection points where these decisions were made and take different paths, how would that affect the world as we know it today?

Opening up our imagination to these questions is an interesting thought experiment: it lets us re-examine history after seeing the benefits and repercussions of an idea. We can review a situation using different lenses, evaluate consequences of actions, and hopefully learn from it to make better decisions in the future. 

Most of the historical figures who accomplished great feats were knowledgeable in many of the big disciplines, such as philosophy, physics, math and biology. They were curious about the inner workings of things. They were able to connect far-reaching ideas across disciplines and use them routinely. These fields of study have a profound impact on life and, the more we pursue them, the better equipped we are to make good decisions. They help us understand cause and effect, correlation and causation. Leveraging different disciplines to explore ideas exemplifies the basis of diversity: we want people with unique experiences and perspectives sitting at the table, thinking wide and far, helping us look at situations through multiple lenses. Diverse exposure shapes what we think and how we think. It opens up the possibility of exploring the unknown and helps us identify connections and opportunities, simplify complexity and remove blind spots.


Claude Shannon, a mathematician and electrical engineer, embodied this trait. Shannon’s 1937 master’s thesis at MIT, “A Symbolic Analysis of Relay and Switching Circuits”, grew out of an idea proposed to him by his advisor, Vannevar Bush. Ten years earlier Bush, a professor and dean of engineering at MIT, had built a prototype machine called the “Differential Analyzer”, an elaborate mechanical system of gears, pulleys and rods controlled by electrical relays, considered the best computing machine of its time. As Shannon’s advisor, Bush recommended that, for his master’s thesis, Shannon try to discover a theory to organize the design and operation of this analog machine.

Before joining MIT, as an undergraduate at the University of Michigan, Shannon had taken an elective class in the philosophy department. The class covered the work of the mathematician and philosopher George Boole, who nearly a century earlier had published “The Laws of Thought”. Boole’s work addressed the fundamental laws of the operation of the human mind and established an algebraic system of logic to describe them, reducing variables to either yes or no, true or false. Shannon’s exposure to Boole’s work in that undergraduate elective helped him conceive the idea that, since the electricity powering Bush’s analog machine could be on or off, it could also be represented by Boole’s variables: true, or 1, and false, or 0. In his thesis Shannon described how relays could perform the operations of binary arithmetic in hardware, and how an obscure boolean theory invented in the mid-1800s, by then taught only in university philosophy departments, could represent the workings of switches and relays in electronic circuits.
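
Shannon’s mapping is easy to state in modern terms: treat a closed relay as true and an open relay as false, and arithmetic falls out of logic. Here is a minimal sketch in Python; the function names are illustrative, not drawn from Shannon’s thesis.

```python
# Boolean operations standing in for relay states:
# True = circuit closed, False = circuit open.

def AND(a: bool, b: bool) -> bool:
    # Two relays in series: current flows only if both are closed.
    return a and b

def OR(a: bool, b: bool) -> bool:
    # Two relays in parallel: current flows if either is closed.
    return a or b

def XOR(a: bool, b: bool) -> bool:
    # Exactly one closed, built from the gates above.
    return OR(a, b) and not AND(a, b)

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    # Adds two one-bit numbers and returns (sum, carry):
    # binary arithmetic assembled purely from switching logic.
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = {int(c)}{int(s)}")  # e.g. 1 + 1 = 10
```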

The implications of Shannon’s discovery were profound. His master’s thesis has been described as one of the most significant theses of the 20th century, winning Shannon the Alfred Noble Prize of the American engineering societies. When asked about his idea to map the operation of relays to Boole’s logic, Shannon stated: “it just happened that no one else was familiar with both fields at the same time”.


There is a thin line between success and failure. In his breakthrough novel “The Sun Also Rises”, Ernest Hemingway has a character explain how he went bankrupt: “Two ways. Gradually and then suddenly.” Success works the same way. If we look back and analyze what led us to a point in time, we can usually see where we went right or wrong. When it comes to technological innovations we tend to focus on the suddenness of events. Yet it is the gradual events that should capture our attention. Excellence requires consistency, endurance, self-reflection and wisdom. All these qualities propel us forward, gradually helping us improve, until we find ourselves “suddenly successful”.

Going back 200 years, the history of computers is usually told from the perspective of products and inventors: Babbage’s Difference Engine, von Neumann’s stored-program machine, Hollerith’s punch cards, Bush’s Differential Analyzer, Turing’s universal computing machine, Hopper’s first compiler, the floppy disk, Ethernet, transistors, artificial intelligence. We tend to look at each of these inventions as an idea conceived by an individual at a point in time. But in reality each idea is an extension of decisions made in previous discoveries, and some will certainly be the building blocks of future inventions. These ideas can also be traced back to advisors and mentors guiding creators to pursue them. The role of mentors and advisors is usually overlooked, yet their guidance has steered decades of innovation.

In his book “Where Good Ideas Come From”, Steven Johnson argues that innovative thinking is a slow, gradual, and networked process in which “slow hunches” are cultivated by exposure to seemingly unrelated ideas from other disciplines and thinkers. Exposure to other disciplines works, he argues, because it lets us explore situations through different lenses and perspectives.

Stuart Kauffman’s theory of the “adjacent possible” offers another path for generating ideas: each new discovery expands the boundary of what can be reached next. His findings, originally crafted for biological systems, apply to any complex adaptive system. The basis of Kauffman’s theory requires leaving our current disciplinary boundaries in order to explore additional possibilities.

Goethe, a successful German poet, playwright, novelist and theatre director, was equally interested in science and philosophy. Besides his extensive artistic work he also published “Metamorphosis of Plants”, which introduced the notion of “homology” in leaf organs, and “Theory of Colors”, his study of how color arises and is perceived.

Outliers throughout history have been able to successfully link ideas across disciplines to introduce new realms of possibilities. 


It took Shannon’s boolean switching idea, published in 1938, ten years to hit critical mass and become visible. During that time, and again under the guidance of his advisor Vannevar Bush, he put his thesis work aside to pursue a completely different topic in an even more remote discipline. Bush was by then president of the Carnegie Institution in Washington, D.C., and had received funding for a genetics laboratory at Cold Spring Harbor on Long Island, New York. He suggested Shannon apply his algebraic methods to the field of population genetics.

“It occurred to me that, just as a special algebra had worked well in his hands on the theory of relays, another special algebra might conceivably handle some of the aspects of Mendelian heredity.” – Vannevar Bush

Over the next few years, Shannon mastered the field of population genetics and in 1940 completed his PhD dissertation, “An Algebra for Theoretical Genetics”. He created theories and equations to describe genotype frequencies, describing algebraically how genes are carried across generations. In 1948, Shannon returned to his original boolean switching work and published “A Mathematical Theory of Communication”, which established much of the terminology used in the information field today, including the term “bit” to describe a binary digit. This paper became the foundation on which other scientists applied his theory to a range of fields, helping bring about the digital revolution.
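
The “bit” is easy to make concrete. A fair coin toss carries exactly one bit of information; a biased coin carries less. A small sketch of the entropy formula Shannon introduced in that 1948 paper:

```python
import math

def entropy_bits(probabilities: list[float]) -> float:
    # Shannon entropy H = -sum(p * log2(p)), measured in bits.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))  # 1.0: a fair coin is one bit per toss
print(entropy_bits([0.9, 0.1]))  # ~0.47: a biased coin tells us less
```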

If Shannon had not found the correlation between electricity’s on-off states and Boole’s hundred-year-old logic, is it conceivable that someone else would have taken up this line of study? Maybe, with time. But the timeline of technology products and innovations, and most certainly the digital revolution that followed, would have played out very differently.

And if Shannon had continued his work on genetics, could we be in a better place today in understanding how biology can be engineered? Absolutely. Biology is the result of evolution, not human development. More focus on, and a better understanding of, biology’s theories and life protocols would have greatly benefited innovation in this space. Having someone like Shannon look at the science behind how life’s protocols work, and articulate theories to explain biological processes, would have been very powerful at the onset of the digital revolution. Maybe we could have used these protocols as the foundation for what we built. We are still in the early stages of discovering biology, and nowhere close to being able to engineer it. This includes understanding how human consciousness works, how the brain works, how neurological responses to emotions such as love, fear and anxiety are triggered, and how brain processes might inform the engineering of artificial general intelligence.


For the last two thousand years we have used evolving metaphors to try to explain the meaning of human intelligence. This progression is laid out in George Zarkadakis’ book “In Our Own Image” and summarized by Robert Epstein in his essay “The Empty Brain”:

“In his book In Our Own Image, the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence. 

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain.

By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence, again metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph. 

In 1958 John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.”

This last and most recent metaphor, von Neumann’s, is only a few decades old and is rooted in the idea that humans are information processors like computers. By proxy, it is also rooted in Shannon’s boolean logic theory. The timing of Shannon’s work helped set the stage for the current state of artificial intelligence.

Computer programming has become more sophisticated in the last few decades, moving from encoding specific rules with deductive logic to letting computers infer and learn rules from their environments using neural networks and inductive logic. But if the goal is to build machines that can objectively reason, anticipate future events, have emotions, and imagine possibilities beyond what we train them to learn, we are still far from getting there, let alone from exceeding our own minds. Envisioning the brain only as a computer, without considering biological processes and protocols as a foundation for how we look at this space, has led artificial intelligence research down many blind alleys.
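
The contrast between the two styles fits in a few lines; the spam-filter task and data below are invented purely for illustration.

```python
# Deductive: the programmer states the rule explicitly.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# Inductive: a (toy) rule is inferred from labeled examples instead.
examples = [("free money now", True), ("free money!!!", True),
            ("meeting at noon", False), ("lunch tomorrow?", False)]

def learn_keyword(data):
    # Pick the word that best separates the labeled examples.
    words = {w for text, _ in data for w in text.lower().split()}
    def accuracy(word):
        return sum((word in text.lower()) == label for text, label in data)
    return max(words, key=accuracy)

keyword = learn_keyword(examples)
print(keyword)  # a rule we never wrote down, induced from data
```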


Unlike our heavily engineered computational processes, life runs on universal protocols. The fact that a plant, for example, physically exists means that it is by default compatible with its environment through defined protocols: its root system absorbs water and nutrients from the soil; its leaves process sunlight through photosynthesis to generate energy in the form of sugar; mycorrhizal fungi in the soil seek out phosphate and other nutrients and trade them with the plant. Protocols define the way two entities communicate, interact and behave. They define the behavior an entity must exhibit in order to communicate with another.
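
Software has a direct analogue of this idea: a protocol as a behavioral contract that any entity may satisfy. Here is a sketch using Python’s typing.Protocol; the nutrient-exchange interface is an invention for illustration, not a real biological model.

```python
from typing import Protocol

class NutrientExchange(Protocol):
    # The contract: anything that can offer and accept nutrients.
    def offer(self) -> dict[str, float]: ...
    def accept(self, nutrients: dict[str, float]) -> None: ...

class Plant:
    def offer(self) -> dict[str, float]:
        return {"sugar": 1.0}  # sugars from photosynthesis
    def accept(self, nutrients: dict[str, float]) -> None:
        print("plant received", nutrients)

class Fungus:
    def offer(self) -> dict[str, float]:
        return {"phosphate": 0.4}  # scavenged from the soil
    def accept(self, nutrients: dict[str, float]) -> None:
        print("fungus received", nutrients)

def exchange(a: NutrientExchange, b: NutrientExchange) -> None:
    # Any two parties that satisfy the contract can interact,
    # regardless of what they otherwise are.
    b.accept(a.offer())
    a.accept(b.offer())

exchange(Plant(), Fungus())
```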

We started building the internet in a similar way, with open protocols and networks. Originally this included protocols such as TCP/IP to define the network, HTTP to describe web data exchange, SMTP to articulate how to send and receive mail, NNTP to access Usenet news, and RSS to syndicate content feeds. Protocols and computer languages were open, and everyone could use them. Everyone could be on the internet, exchange mail, read news.
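
Because these protocols are open and text-based, anyone can speak them directly. A minimal HTTP exchange over a raw TCP socket (this sketch assumes network access to example.com):

```python
import socket

# Speak HTTP "by hand": open a TCP connection and send the request text.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# First line of the reply, e.g. "HTTP/1.1 200 OK".
print(response.decode(errors="replace").splitlines()[0])
```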

Over the years we have shifted away from building protocols and toward building contained environments. These environments are usually walled in, and at times opened up through what we call “interoperable assets”. The notion of interoperable assets is a misnomer: these assets don’t physically exist; they exist only virtually, as data. For one virtual asset to understand another, the receiving system must accept the other’s information and then run code to interpret and operate on it.

To define our virtual technology assets we use hundreds of different file formats to structure and store data. The systems where this data resides are highly fragmented and full of code customizations. As a result, the definition of an “asset” is typically limited to one implementation of that asset, within one walled environment, in a particular system, at a point in time and in a limited capacity.

When we create artificial intelligence models, we can choose from dozens of popular runtime engines to develop, train, and run them. As a result, almost all of the models we create are incapable of understanding what another model considers an asset, such as a “plant”, let alone able to use the representation, or code, for that asset across disparate models.
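
A small illustration of the problem: the “same” plant described in two common formats, each needing bespoke code just to recover the same two facts. The field names are invented for the example.

```python
import json
import xml.etree.ElementTree as ET

json_plant = json.loads('{"species": "tomato", "height_cm": 42}')
xml_plant = ET.fromstring(
    '<plant><name>tomato</name><height unit="cm">42</height></plant>')

# Hand-written translation is required before the two even compare.
from_json = (json_plant["species"], json_plant["height_cm"])
from_xml = (xml_plant.findtext("name"), int(xml_plant.find("height").text))

print(from_json == from_xml)  # True, but only after custom glue code
```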

Interoperability on the internet turned out to be very difficult. 

Another example that helps illustrate this difficulty is the evolution of APIs (Application Programming Interfaces). APIs have been around for as long as we’ve had programming languages. They have enabled innovation for decades by allowing companies to access functionality provided by other companies instead of having to build it themselves. They are designed to be consumed by software, used by other applications to access data and services.

The proliferation of APIs created the need for tools to manage them, then for marketplaces to find them, then for tools to address the challenges of integrating APIs with other systems. Connecting systems through APIs is no easy task, as there is no standardization among them; this led to the creation of API connectors to remove some of the additional complexity. The technological complexity that has accumulated around APIs alone is telling.
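
What consuming one of these APIs looks like in practice can be sketched in a few lines; the endpoint and response shape below are hypothetical, and none of it transfers to the next provider’s API without new integration code.

```python
import json
from urllib.request import urlopen

def fetch_weather(city: str) -> dict:
    # Hypothetical endpoint: every provider differs in URL scheme,
    # authentication, and payload structure.
    with urlopen(f"https://api.example.com/v1/weather?city={city}") as resp:
        return json.load(resp)

# forecast = fetch_weather("Lisbon")  # would fail: the endpoint is made up
```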


We are still in the process of discovering the protocols that govern life. 

The biological “protocol” of grafting is particularly interesting. As an engineer, I can easily appreciate the parallels between grafting as a protocol and technical protocols such as TCP/IP. Both follow the notion of “layers” to establish connectivity, deliver packets of data, and handle communication and error conditions.

Grafting is a horticultural technique where tissues of two different plants are joined together as a form of asexual propagation. One of the plants, the rootstock, is selected for its roots typically because of certain qualities that it provides such as drought tolerance, sturdiness, or dwarfing. Another plant, called the scion, is selected for its stems, leaves, or fruit. The practice of grafting can be traced back 4,000 years to ancient China. It took 2,000 years for practitioners to learn that not all plants could be grafted together; plants lacking vascular cambium, for example, cannot normally be grafted. And it was only recently that scientists discovered that the graft connection enables RNA, protein and DNA (the plant’s “data packets”) to be transported between scions and rootstocks via vascular tissues, producing physical variations in the plant to better adapt to an environment. This exchange of genetic material through the graft union enables the end product to inherit beneficial traits from both plant materials, providing a molecular basis for grafting-induced genetic variation.

TCP/IP defines how two computers address each other and exchange data; a plant’s “grafting protocol” follows a similarly layered design to establish scion-rootstock communication: ruptured cells from the scion and rootstock make contact; a cell communication network is established; cell division and callus formation kick in; callus differentiation is triggered; and vascular tissue reconstructs.
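
The analogy can be laid out side by side. The pairings below are my own illustration, not an established mapping from the literature:

```python
# (TCP/IP-style phase, grafting phase) -- illustrative pairings only.
GRAFT_AS_PROTOCOL = [
    ("physical link established", "ruptured scion and rootstock cells touch"),
    ("connection handshake",      "cell communication network forms"),
    ("session established",       "cell division and callus formation"),
    ("channels negotiated",       "callus differentiation"),
    ("payload transfer",          "vascular tissue reconnects; RNA, protein, DNA move"),
]

for net_phase, graft_phase in GRAFT_AS_PROTOCOL:
    print(f"{net_phase:<28} ~ {graft_phase}")
```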

However, unlike TCP/IP, where we designed and therefore understand the intricacies of data packets, data transfer, data loss, and short- and long-distance latency, what goes on after the graft union is established is still not well understood. What exactly is transported, what the plant’s “data” looks like once the graft is established, and how far RNA and DNA travel across the graft union are yet to be discovered. We haven’t spent enough time looking at this space to fully understand it.

There are countless life protocols that dictate how life’s processes work. Most of them are still unknown or poorly understood, yet life has been running them efficiently for billions of years. Understanding these time-tested protocols can help us design simpler and more efficient products. They could be great foundations, or building blocks, for the next few decades of innovation.


Decisions made at inflection points of technological breakthroughs create a chain reaction of building blocks for future innovations. Historically, some of these inflection points have been very beneficial in the long run, while others have sent us on paths that are not ideal for the common good. Gradual add-on innovation, on top of bad decisions throughout the last few decades, may be impairing our progress. 

Looking back at these decisions, and understanding why they were made and their limitations, is an important exercise to understand the cause and effect of our choices. 

Given the complexity of technology’s current path, perhaps the answer is to explore time-proven life processes, which have been running for billions of years, and to design protocols and products that resemble what has worked well for so long. This requires that the people designing technology products also have immersive knowledge of other disciplines, such as biology, chemistry and physics. The foundation we could build with such an approach would be a fitting second act for the next generation of technology.
