For 80 years, scientists across many fields have extensively researched the idea of creating machines that would not just understand people but go one step further and outsmart them. This journey, which started in 1943, has seen ups and downs, breakthroughs, and failures all the way to November 2022, which witnessed the emergence of a conversational artificial intelligence (AI) model popularly known as ChatGPT (Generative Pre-trained Transformer): an AI-based chatbot with the ability to process natural human language and respond to a request or a question by generating a response.
This newly emerging technology has taken global markets by storm, given the growing, seemingly unlimited, and diverse prospects it offers. Over the last few months, more than 30 AI platforms have adopted such technology, including but not limited to Microsoft’s Bing, Google’s Bard, and other emerging content-creation tools. It is important to note that conversational AI did not start with ChatGPT, contrary to what many think. In fact, the first chatbot, ELIZA, was created in the 1960s. However, unlike ChatGPT, ELIZA could not generate new text; it produced a limited set of responses based on the pattern-matching rules it was pre-programmed with.
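ELIZA's pre-programmed, rule-based design can be illustrated with a short sketch (a hypothetical simplification in Python; the rules shown here are illustrative, not ELIZA's actual script):

```python
import re

# Illustrative pattern-matching rules: every possible response is written
# out in advance by the programmer. Nothing is generated or learned.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def eliza_respond(text: str) -> str:
    """Return the canned response template for the first matching rule."""
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_respond("I am worried about my job"))
# Any input outside the rules falls back to a stock reply.
print(eliza_respond("hello there"))
```

Every response is bounded by the rule list, which is exactly the limitation the paragraph above describes: the program echoes its rules back at the user rather than generating new text.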
Let us go back in time and understand the evolution of artificial intelligence at a time when innovative technologies are taking center stage in virtually everything we do. For starters, what is AI? Why did it become headline news overnight? What does the acronym stand for? Is it artificial intelligence or augmented intelligence? In reality, it does not matter. Regardless of what the acronym stands for, what really matters is what AI can offer and how it changes human thought, understanding, knowledge, perception, behavior, and, to an extent, reality.
This is the first in a series of articles, to be published over the next few months, addressing this increasingly exciting topic that most people, regardless of their background, profession, and interests, have been discussing for years; that discussion has drastically accelerated since ChatGPT emerged. The accelerated adoption of these AI platforms, also referred to as generative AI, as opposed to traditional AI, is due to their growing prospects and possible implications for various industries, economies, societies, and even what it means to be human.
So, what is the difference between traditional and generative AI? How can we distinguish between the two? And why is generative AI attracting so much more attention from businesses, governments, industries, and academia?
On the one hand, in traditional AI, algorithms process data and return results: detecting patterns, formulating predictions, generating insights, and providing analyses. They require well-curated data as well as clear, specific rules and processes to produce the desired results. On the other hand, in generative AI, algorithms can create new synthesized content, often unstructured, such as audio, text, images, and video, based on the integrity and quality of the large data sets they are trained on. Accordingly, generative AI is more creative, offers much more added value, empowers users, and, more impressively, is easy to use: one can simply start asking questions, and responses are generated seamlessly and rapidly. However, generative AI is also more challenging and riskier to develop and adopt. Therefore, balancing the value created against the risks encountered while adopting AI is imperative.
My gut feeling tells me that the differences are way more than that, and the potential is limitless, but we still need to learn the boundaries and where this technology is taking us. For now, let us leave it at that and consider this article as AI 101 with more focus on traditional AI, leaving generative AI for future editions of the NileView, given its massive prospects to accelerate innovation, creativity, and discovery.
From the outset, AI reflected the notion of getting machines to think like humans. In fact, optimists argued that it could go one notch further by performing many tasks faster and, more importantly, better than humans. The objective is to use AI for the betterment of society. The idea of having computing devices think and operate like humans came to life during World War II, with the emergence of fields such as computing and neuroscience. The ultimate objective was to create an intelligent and innovative computing machine that could come as close as possible to human behavior and, in a way, store knowledge so that anyone at any point in time could ask questions and get answers. As a result, the world witnessed the production of early robots with the potential to fool anyone into thinking they were talking to a human rather than an intelligent machine, the beginnings of the field widely known today as robotics.
In the 1950s, with the evolving narrative of intelligent machines and their possible implications for the future of humanity, novelists turned to science fiction, exploring the opportunities these machines could create and their impact on society. Many of these novels were turned into movies, and although the first science fiction movie, A Trip to the Moon, was released in 1902, the 1950s are frequently considered the start of the golden age of science fiction cinema, built primarily on what advanced machines might offer. To this day, these movies document, in multiple creative ways, the journey of humans versus machines, with imaginative scripts and technological visions of how intelligent machines powered by timely and relevant data affect various aspects of our daily lives. The irony is that for decades we have known these movies as science fiction, except that today there is nothing fictional about them, right?
Since the 1950s, the debate has focused on whether the approach to developing AI should be top-down or bottom-up. In other words, should computers be pre-programmed, top-down, with the known processes, rules, and regulations that in many ways govern human behavior when addressing a particular issue, or should a bottom-up approach be used, such as neural networks that simulate brain cells and learn and, more importantly, adapt new behaviors in a transformative way? For many years, despite the millions, if not billions, of dollars spent on AI funding, the extensive research conducted, and the hype about AI's prospects, not much was achieved. Progress remained confined to the automation of some simple tasks, while common-sense reasoning and capabilities such as facial and speech recognition, commonplace today, remained beyond AI's reach.
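The two approaches can be made concrete with a small, hypothetical sketch in Python: the top-down version encodes the rule for logical OR by hand, while the bottom-up version lets a single artificial neuron (a perceptron, one of the earliest neural-network models) learn the same behavior from examples by adjusting its weights.

```python
# Top-down: the programmer writes the rule for logical OR explicitly.
def or_rule(x1: int, x2: int) -> int:
    return 1 if (x1 == 1 or x2 == 1) else 0

# Bottom-up: a single neuron starts knowing nothing about OR and learns it
# from labeled examples by nudging its weights toward correct answers.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            output = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = target - output          # 0 when the neuron is right
            w1 += lr * error * x1            # classic perceptron update
            w2 += lr * error * x2
            b += lr * error
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The full truth table of OR, as training data.
examples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
learned_or = train_perceptron(examples)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "rule:", or_rule(x1, x2), "learned:", learned_or(x1, x2))
```

Both functions end up with identical behavior, but they got there differently: one was told the rule, the other adapted to examples, which is the essence of the bottom-up approach that later powered machine learning and deep learning.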
Only in the 1980s did a breakthrough appear on the horizon, with potential solutions for business development and the commercialization of AI. However, the evolving solutions were mainly limited to structured tasks programmed with specific rules for a particular problem. At the time, they were called expert systems. These systems eventually proved commercially viable, but their limitations kept them far from the initial idea and aspirations of AI.
Until the late 1980s and early 1990s, the overwhelming approach was top-down: pre-programming computers with the rules and regulations associated with intelligent behavior. In the 1990s, however, there was a shift to the bottom-up approach mentioned earlier, influenced by growing interest and research in neural networks. The back and forth over which approach could deliver a breakthrough matching the aspirations and prospects of AI continued well into the 1990s. The question that poses itself is whether 1997 was the year AI really came to life, when IBM's supercomputer Deep Blue defeated the then World Chess Champion, Garry Kasparov. Maybe yes! Today, however, many widely used AI applications are known to use a bottom-up approach based on machine learning (ML) and deep learning (DL).
This was the moment a machine using a top-down approach could go beyond highly sophisticated pre-set rules to solve a problem and think strategically. That development was followed by a series of robots appearing in homes, offering various services and performing a wide set of specialized tasks, leading to further commercialization of products; some became essential household gadgets. Around that time, the world started hearing the notion of smart homes. It is essential to understand that AI is neither a product nor a new industry unfolding; it is a creative enabler capable of transforming many businesses, industries, and facets of human life.
Today, even for ordinary adopters of technology such as mobile, desktop, and tablet users, artificial intelligence is transforming lives: digital voice assistants such as Siri and Alexa, unlocking devices and applications using facial recognition, and even auto-completing sentences when writing emails, documents, and more. These tools often make mistakes, but as with any other technology, they are works in progress, and subsequent editions will offer improved features, since AI, by design, is built to learn. AI, in its different forms and shapes, will continue to evolve and change many aspects of people, organizations, and societies, both positively and negatively.
AI can improve the quality of life by performing routine and sophisticated tasks as well as, or arguably better than, humans, making people's lives more efficient. While AI is a platform to access an unlimited knowledge repository, learn new skills, and communicate with others, it can also threaten people's privacy, security, and well-being.
AI can enhance productivity and efficiency by automating processes and optimizing resources. It can help organizations create new products, services, and business models. However, AI can disrupt markets, industries, and the future of work by eliminating jobs and creating new ones while challenging organizations regarding governance and ethics.
AI can help societies address climate, healthcare, education, and poverty challenges and create a more sustainable and smarter future. However, AI can widen the divides between and within societies by exacerbating inequality.
Whether it is artificial or augmented intelligence, traditional or generative, I firmly believe that human reasoning, creativity, and intelligence remain invaluable. There is no question that AI can potentially transform the entire human experience as we know it. It is still early days, and there is a lot to expect in the months and years to come. However, the value added through human intervention remains significant in the numerous cases where AI platforms, such as chatbots, remain limited to this day.
We live in the age of AI and witness daily how it impacts our lives and livelihoods. The irony is that even with the current status of generative AI, we are still scratching the surface, and the extent and magnitude of its business and socioeconomic impact, including the value that could be created and the risks and uncertainties to be encountered, are yet to be fully understood. This whole space of the new generation of generative AI came to life less than nine months ago, so it is still early in its development lifecycle. In the world of innovative technologies, this is like version 1.0.
Today, there are way more questions than answers about AI. For example, how much do we know about it? How do we prepare for it? What are the talents and skills required? How do we drive it? Do we understand the boundaries of its positives and negatives? Do we know the extent of its opportunities and challenges? What are the financial implications of adopting AI? These are only some key questions and issues that need to be addressed. One thing is for sure: in the age of AI, we, as individuals, organizations, and societies, need to think differently, plan differently, and act differently.
As we think of AI as a society while looking into the future, it will be more rewarding to focus on how AI can enhance our daily lives by augmenting our knowledge, improving and accelerating our performance and contribution to our professions, and helping us explore more opportunities, rather than limiting ourselves to the idea that advanced machines will replace humans. In my view, this will not happen; instead, AI will offer us the ability to be more efficient, adaptive, empowered, and focused on more creative endeavors.
About the author: Sherif Kamel is a Professor of Management and Dean of the School of Business at The American University in Cairo.
31 July 2023
Issue #32