Around 300 BC Euclid, a Greek mathematician and philosopher, documented what is considered to be the first algorithm in his 13-book compilation “Elements”. The Euclidean algorithm is an effective method for finding the greatest common divisor of two whole numbers. The algorithm has been a building block for computational and mathematical applications; it is considered one of the fundamental algorithms in history and, more than two thousand years later, continues to play a crucial role in our lives.
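As a minimal illustration, here is the Euclidean algorithm sketched in Python (the gcd function below is ours for clarity; Python’s standard library already ships an equivalent as math.gcd):

def gcd(a: int, b: int) -> int:
    # Repeatedly replace (a, b) with (b, a mod b); when the remainder reaches
    # zero, the remaining value is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21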
We use the Euclidean algorithm in cryptographic systems, particularly in public-key encryption, where it facilitates operations such as key generation, encryption and decryption (we rely on it every time we use the internet). It is used in error detection, error correction and checksum methods when transferring data. It is used in computer graphics to resize and scale images while maintaining aspect ratios and preserving proportions. It is also used in music theory to understand rhythm and create musical patterns, evenly distributing beats within a given time period. Euclid’s algorithm has been embedded in tens of thousands of industry products throughout the modern era, but the concept of the embedded algorithm is hardly unique to Euclid.
Hundreds of algorithms have followed a similar path, some of them changing foundational thinking across multiple domains. Dijkstra’s algorithm, for example, finds the shortest path between nodes in a graph; we use it every time we fly on a plane or ask a GPS for directions between two points. Prim’s algorithm, which finds the minimum spanning tree of a connected weighted graph, helps determine the optimal placement of power substations and transmission lines; we use it to build infrastructure, utility networks and pipelines. We use the Fast Fourier Transform algorithm every time we watch videos, compress images, use radar or sonar systems, and process music for pitch detection, sound synthesis or audio equalization. And backpropagation, the algorithm used to train artificial neural networks, has been used for years in financial forecasting, autonomous vehicle navigation and speech recognition.
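To make one of these concrete, the sketch below implements Dijkstra’s shortest-path idea in Python on a toy road network; the graph, node names and weights are illustrative placeholders, not data from any real routing system.

import heapq

def dijkstra(graph, start):
    # graph: dict mapping each node to a list of (neighbor, edge_weight) pairs.
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]  # min-heap of (distance found so far, node)
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            candidate = dist + weight
            if candidate < distances[neighbor]:
                distances[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return distances

# Toy network: weights could be travel times in minutes.
roads = {"A": [("B", 5), ("C", 2)], "B": [("D", 4)], "C": [("B", 1), ("D", 7)], "D": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 7}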
Algorithms are embedded in our daily lives. They are in our electronic toothbrushes, digital newspaper applications and baby monitor devices. We use them first thing in the morning with our alarm clocks, when we drive to work and when we pick a movie to watch from a streaming service. They automate our routines and make us more efficient and productive.
The role of algorithms in our lives continues to grow exponentially; they keep evolving and optimizing how we live. Many people have found the advent of algorithms to be a welcome addition to their daily routines.
From GANs to Transformers
Technology only improves when a lot of people work very hard to make it better. The difference between speed and velocity is important in this scenario. Speed, the distance traveled over time, may not get you anywhere as you may be going fast in circles. Velocity, on the other hand, is directional. With velocity you have intention and outcome as you travel in a particular direction over time.
Algorithms follow this rule as well: they have a velocity of change. Hundreds of them have evolved over time with complex progressions. One example is the GAN (Generative Adversarial Network) algorithm created over a decade ago and used to produce realistic synthetic image data.
Artificial Intelligence algorithms need to be trained, and we train them with data. Lots and lots of data. We tend to spend most of our time capturing, cleaning and organizing data to train our models, and very little time coding and training the algorithms. Back in the day, when we didn’t have GPUs or specialized hardware for AI, or datasets large enough to train with, we leveraged GANs to generate images for our training sets. We could provide an image to a GAN, and the model would create a set of images that were similar to our original image yet different: the perspective could be shifted, the lighting changed, or some of the objects in the image removed altogether. This was necessary to keep our models from overfitting, which happens when we train them on too little data, leaving the model unable to properly classify new images.
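A minimal sketch of that adversarial setup, assuming PyTorch is installed; the layer sizes, the image dimension and the random stand-in training batch below are placeholders, not the models or datasets described here.

import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # noise size and flattened image size (placeholders)

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                          nn.Linear(256, IMG_DIM), nn.Tanh())
# Discriminator: scores how "real" an image looks (1 = real, 0 = generated).
discriminator = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()
real_images = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for a real batch, scaled to [-1, 1]

for step in range(200):
    # Train the discriminator: push real images toward 1, generated ones toward 0.
    fake_images = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to fool the discriminator into predicting 1.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, LATENT_DIM))), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, fresh noise yields novel synthetic images for augmenting a training set.
synthetic_batch = generator(torch.randn(16, LATENT_DIM))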
Over time it became clear that, while GANs could produce realistic synthetic image data, they could not handle sequential data very well. This shortcoming contributed to the rise of Transformer-based models such as GPT (Generative Pre-trained Transformer).
Transformer models excel at handling sequential data, powering tasks such as text and speech generation, text summarization, language translation, video understanding and DNA sequence analysis.
Just as new disruptive business models were created with the advancement of past algorithms, a myriad of new business models is being developed by hundreds of new startups building on Transformers across multiple disciplines. If past experience is any guide, some of these startups will disrupt industries and displace incumbents.
Even though we may be inundated with talk about Generative AI and Transformers, it is important to remember that we are still in the infancy of this technology, and this is only one Artificial Intelligence algorithm amongst hundreds that are currently in development by the AI community. Many of these newer ideas and technologies will come online in the next few years across multiple industries and domains.
The Role of Algorithms in Artificial Intelligence
When we talk about Artificial Intelligence we are really talking about a set of algorithms packaged as models. Once trained with data, these models can take data as input and produce more data, or value, as output.
The goal of these models is to emulate human cognition and understanding, but we are still far from accomplishing that objective. Artificial Intelligence algorithms today still deliver point solutions; they can only automate isolated tasks and processes. As much as we call them intelligent, they are not. They are simply much faster than humans at identifying and executing patterns.
As fear grows about the future capabilities of Artificial Intelligence, we have to remember that machines cannot reason. We can’t introduce cognition to machines simply because we still can’t define human cognition; one can’t code what one doesn’t understand. Neuroscientists still can’t quite explain how fear, anxiety and love are triggered in our brains. The same goes for thoughts, memories and dreams. There is a huge gap between our current understanding of these biological processes and a sentient Artificial Intelligence performing them autonomously.
This doesn’t mean that we won’t continue to see exponential Artificial Intelligence growth in the next few decades. We are, after all, in the nascent stage of the technology. Breakthroughs in mathematical modeling, availability of high-quality data sets, and improvements in hardware and computing power continue to help in the development of new opportunities for algorithms and trained models.
The Era Of The Algorithms
There is a unique opportunity in the advent of algorithms and their potential for advancement across many platforms and domains: from early disease detection to advanced climate modeling, from personalized learning systems to personalized treatment plans, from precision agriculture to better waste management. This is an evolution of consistent, incremental innovation over many decades. The ability to stay consistent, regardless of the noise along the way, is a skill that mathematicians and Artificial Intelligence practitioners have mastered and will continue to practice over time. Progress through relentless consistency is the key, a year or a decade at a time. For all their sophistication, Artificial Intelligence systems are still pattern detectors. And nearly all human behavior follows predictable patterns.
As we continue to find innovative ways to capture and store data safely and sustainably, and as long as we have historical data to feed and train our models, we will find new patterns to improve on and optimize. Things become more interesting when we project these patterns into the future: as on Wall Street, finding patterns is one thing, but projecting and predicting into the future is where the true value lies.
Newer models are expensive to run and at times churn out wrong, nonsensical or biased outcomes. But models are now also free. Anybody with a desire to learn can download an open-source GPT Transformer and train it on a desktop or in the Cloud. Technology eventually becomes disruptive when it takes root at the bottom of the market, with the masses, and relentlessly moves upmarket, eventually displacing established competitors. With exponential use, the models will continue to improve.
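As a minimal sketch of that point, assuming the open-source Hugging Face transformers library is installed: the snippet below downloads the freely available GPT-2 model and generates text locally; fine-tuning on your own data would start from the same model.

from transformers import pipeline

# Downloads the open-source GPT-2 weights on first run, then runs entirely locally.
generator = pipeline("text-generation", model="gpt2")
result = generator("Algorithms are embedded in our daily lives because", max_new_tokens=40)
print(result[0]["generated_text"])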
Just as we embed traditional algorithms in our applications, developers can now embed AI models in their applications. These models can be trained on data that follows the robust governance processes defined by their companies, managing risk and compliance along the way. The scenario of a single, all-powerful Transformer that everybody uses, capable of answering every possible question, is not sustainable in the long run.
One thing is clear: with this rare opportunity for companies to rapidly evolve rather than succumb to new business models, it is important that CEOs and Board Members become comfortable in this space, not only by embracing this technology but also by mastering it.

