Artificial Intelligence: Revolutionizing the Way We Do Things

By Alem Haddush Fitwi (PhD)

As extensively studied and recorded by a myriad of anthropologists, archaeologists, palaeontologists, historians, and other scholars, humans have gone through many technological advancements over thousands of years, arguably to improve their living conditions.

People might, however, have differing opinions regarding the motivations for technological ideations and advancements. For humans, as is often said, there is no accounting for taste; but many of the various arguments made regarding technological inventions and advancements contain some grains of truth that are hard to ignore completely.

Many attribute them to addressing societal and communal problems. Put another way, they firmly believe that technological inventions are made with the benign motives of extricating the inventors and their fellow humans from the quagmire of darkness and poverty and protecting the human race from nature’s wrath by making the world a better place to live. They tend to be convinced that technological products can at least help people mitigate the effects of such dangers as diseases, epidemics, pandemics, locust plagues, bug infestations, or natural disasters in a thousand little ways.

Others attribute technological creations to human indolence, contending that many inventions were made due to lazy mindsets. That is to say, often, technological progress stems from the idea of evading arduous, tedious, or time-consuming tasks. Laziness might push some to try out new solutions that make everything more automated and efficient, leaving them with plenty of time to sleep on their sofas. Eventually, the invention of the indolent gets adopted by more people and becomes a norm.

Yet others ascribe inventions to the power hunger, avarice, and sinister intents of human beings. Many believe that most, if not all, technological progress is driven by people’s morbid obsession with power.

They argue that a truly community-based world, where people are kind and understanding and ready to help each other out without expecting anything in return, would never have driven technological advancement at the rate that wars have. They believe that most scientific and technological advancements of recent decades have been driven by an arms race for better weaponry that gives one side better control over others. They often argue that no force motivates human beings to develop technology faster than the desire to kill each other, capture resources, or grab land that belongs to others.

Furthermore – though it might sound like a luxurious activity to some, and in direct contradiction to the common English maxim “curiosity killed the cat” – a sense of adventure to discover the other side of the world is also considered to have played a major role in technological advancements. To rephrase the saying: curiosity about what lies on the other side of the treacherous, ungovernable, and unfathomable oceans inspired people to invent boats. That, in turn, paved the way for the construction of state-of-the-art gigantic ships longer than a soccer field. Similarly, the burning desire to discover what is on the other side of mountains and impenetrable jungles pushed people to invent countless useful devices.

Likewise, many scientists have spent their lives working to understand how the human brain processes information, and in recent decades they have created artificial intelligence inspired by it.

The constructions of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) models (with input, hidden, and output neurons) are typically inspired by how the billions of neurons – the electrically conducting cells in our brain that give us the incredible computing power, memory, and ability to think that make us humans unique – work. Neurons have special components such as dendrites, which carry information to the cell body from other neurons, rather like a receiver (Rx); axons, which take information away from the cell body, like a transmitter (Tx); and synapses, which help information flow from one neuron to another, somewhat like a communication medium.
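To make this analogy a bit more concrete, here is a minimal sketch of an artificial neural layer in Python with NumPy (my own illustrative example, not a production model): the inputs play the role of dendrites (Rx), the weighted sum and activation stand in for the cell body, and the values passed onward act like signals travelling down axons (Tx) across synapses, whose strengths are the weights.

```python
import numpy as np

def sigmoid(x):
    """Squashing activation, loosely analogous to a neuron's firing response."""
    return 1.0 / (1.0 + np.exp(-x))

def dense_layer(inputs, weights, biases):
    """One layer of artificial neurons.

    inputs  : signals arriving from other neurons (the 'dendrite'/Rx side)
    weights : synaptic strengths connecting the two layers
    returns : signals sent onward to the next layer (the 'axon'/Tx side)
    """
    return sigmoid(inputs @ weights + biases)

# A tiny network with 3 input, 4 hidden, and 2 output neurons.
rng = np.random.default_rng(0)
x = rng.random(3)                          # input neurons
w1, b1 = rng.random((3, 4)), np.zeros(4)   # input -> hidden "synapses"
w2, b2 = rng.random((4, 2)), np.zeros(2)   # hidden -> output "synapses"

hidden = dense_layer(x, w1, b1)
output = dense_layer(hidden, w2, b2)
print(output)  # activations of the two output neurons
```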

However, it is also common to encounter people who are completely averse to technology, live in a wishfully mythical or non-existent world, and wistfully say, “Gone are the good old days, when everything was simpler and better.”

Some tend to view all kinds of technological ideation and invention as satanic and ungodly acts, as a result of which they romanticize what they see as ‘the primitive lifestyles of the past’, vehemently objurgate the present, and condemn technological creators, spreaders, enablers, helpers, and users as anathemas.

Generally, many consider technological advancement both a blessing and a curse. Personally, I am of the opinion that technology gives us a sixth sense to better fathom the workings of nature and our surroundings, and to make the world friendlier to humans in many ways. Its upsides usually outweigh its downsides.

Of course, it always depends: the argument in favor of nuclear technology, say, that it must be considered ‘one of the engineering marvels of the 20th century’ and has made the world a better, safer place, is something I would take with a huge pinch of salt.

From a historical point of view, humans have gone through three major waves of change to date: the Agricultural Revolution (circa 10,000 B.C.), the Industrial Revolution (1750s to 1950s), and the Information Revolution (1950s to present), and the role of technological advancements in these waves has been irreplaceable.

Taking a brief look at technological periodization, humans have seen a great many technological advancements during the Prehistoric, Ancient, Middle, and Modern Ages; the vast majority of them, though, occurred at an astonishing pace during the Modern Age. History tells us that it all began with Homo habilis, who managed to invent choppers (sharpened stone tools) by smashing one stone against another during the Old Stone Age. Progress back then is believed to have been extremely slow; humans are said to have spent the three Stone Ages (the Old, Middle, and New), or millions of years, working with such basic tools. These inventions led to the Copper Age, when humans started smelting and manipulating copper to make tools. This, in turn, laid the foundations for the next technological period, known as Ancient History, dominated by the Bronze and Iron Ages, when humans began using bronze and iron tools.

The technological advancements during the Modern Age – divided into various sub-blocks of time like the Industrial Age, the Machine Age, the Atomic/Nuclear Age, the Space Age, and the Information Age – are unfathomably astounding. Electricity, certainly one of the most ingenious developments of the 19th century (the groundwork was laid in the middle of the 18th century; Benjamin Franklin is credited by some with discovering electricity through his famous kite-flying experiment of 1752), laid the foundations for the invention and development of computers in the 1940s and 1950s, which ushered in the Information Age.

Another ingenious invention is the Internet, whose precursor, the ARPANET, was brought into being by the US Department of Defense (DoD) in the late 1960s and 1970s. Now, thanks to it, the world is wired; metaphorically speaking, it has narrowed to a village. The various social networks and other services that run on the Internet have empowered people to build communities irrespective of geographical location and brought the world closer together. The Internet has become one of the basic human necessities, affecting how we communicate and do business.

Disciplines such as Engineering, Computer Science, Biology, Psychology, Linguistics, and Mathematics constitute the foundation of Artificial Intelligence (AI), a science and a technology. The development of computer functions associated with human intelligence has been the main thrust of AI. That is, the advent of computers and of the various computing paradigms (edge, fog, and cloud computing) enabled by the Internet has paved the way for the advancement of Artificial Intelligence.

From the point of view of computing, the human brain mainly offers three functionalities: computing power, memory, and the ability to think. AI was then brought into being with the notion of leveraging computers and machines to mimic the problem-solving and decision-making capabilities of the human mind; it nevertheless still relies on data and specific instructions fed into its models.

The earliest successful AI program was written in 1951 by Christopher Strachey, one of the computer pioneers, who later served as the director of the Programming Research Group at the University of Oxford. A few years later, John McCarthy coined the term Artificial Intelligence, while Allen Newell and Herbert Simon demonstrated the first AI program to run in the United States at what is now Carnegie Mellon University.

Unlike other computing fields, AI progress had been relatively quiet until the 1980s. In the 1980s, Geoffrey Hinton, dubbed the ‘Godfather of Artificial Neural Networks’ (ANN), carried out a lot of promising research and published many papers. A decade later, Yann LeCun – who had worked under Hinton and is often called the father of Convolutional Neural Networks (CNN) – took AI to another level in the 1990s. He made a great contribution to the conception and advancement of CNNs. Both remain among the active drivers of deep learning at their respective companies.

CNNs are an extremely intriguing area and, at the same time, very challenging to deal with. The field is involved, convoluted, and math-heavy, and it demands a great deal of computational power. Hence, mainly due to the lack of sufficient computational power, it went through another quiet period – until AlexNet’s victory in the 2012 ImageNet challenge revolutionized the development of CNNs. Since then, a number of more accurate convolutional networks, like VGGNet, ResNet, and many others, have been developed based on deeper and more involved network architectures. These types of networks are convenient for cloud computing or server-based deployment.
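For illustration only, the following sketch shows the general shape of such a convolutional network, written here with PyTorch (an assumption on my part; the toy architecture below is far smaller than VGGNet or ResNet): stacked convolution and pooling layers extract features, and a final linear layer classifies them.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A toy convolutional network for 32x32 RGB images (CIFAR-10-sized input)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn 16 local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper, more abstract filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# One forward pass on a random batch, just to show the shapes involved.
model = TinyCNN()
scores = model(torch.randn(4, 3, 32, 32))  # batch of 4 images
print(scores.shape)  # torch.Size([4, 10])
```

Deeper variants simply stack many more such convolution-pooling blocks, which is precisely why they demand so much computational power.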

Starting in 2015, the trend shifted toward developing lightweight CNN models that can fit into mobile and other edge computing environments. The advancement of graphics processing units (GPUs) has played a paramount role in the development of AI, in that GPUs have evolved from single-core, fixed-function hardware employed exclusively for graphics into sets of programmable parallel cores capable of handling the most complex deep learning calculations.

Today, AI has become ubiquitous. It can recommend future pathways for navigating our companies in the right direction based on insights derived from huge volumes of historical data; it enables us to make informed decisions; it provides us with directions while driving; it answers our questions; and it automates tasks and communications in the workplace through the use of chatbots.

It was during the last decade that AI evolved into technologies like facial recognition and autonomous cars, and it has quickly become tangibly accessible, far beyond theory and academia. It is now applied in ways that can help people a lot. Some of the most fascinating generative AI tools in use today are ChatGPT, Grammarly, Voice.ai, Repurpose.io, Synthesia, Jenni AI, Andi, Cowriter, Skim It, Prompt Hunt, and All Search AI.

To sum up, as of 2022, we have embarked on a new era, the AI era. I am 100% in agreement with what Mark Cuban, an American entrepreneur and television personality, had to say in 2017 about the role AI would play in the workplace. “AI, deep learning, machine learning — whatever you’re doing if you don’t understand it — learn it. Because otherwise, you’re going to be a dinosaur within three years.”

Buzzwords like AI, ML, and DL are confusing at best and unfathomable at worst. Many find it hard to differentiate between them. Here, I would like to concisely shed some light on their similarities and differences. Using mathematical set notation to elucidate their relationships: AI is the superset, ML is a subset of AI, and DL is a subset of ML.

In a broader sense, AI can be defined as the ability of an electronic machine to mimic intelligent human behavior. Intelligence, an intangible human trait, comprises five elements: reasoning, learning, problem-solving, perception, and linguistic intelligence.

Accordingly, AI was intended to have all these attributes; it is not there yet, though. ML is a specific application of AI that allows a hardware and software system to learn and improve from experience automatically.

Similarly, ML was precisely defined by Tom Mitchell: a computer program is said to learn from experience (E), with respect to some class of tasks (T) and performance measure (P), if its performance P at tasks in T improves with experience E.

Based on its purpose and how its models are trained and created, ML is commonly divided into categories such as Supervised, Unsupervised, Semi-supervised, and Reinforcement Learning. DL is an application of ML that uses highly involved algorithms and deep neural networks to train a model.
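As a hedged illustration of Mitchell’s E/T/P framing, and of supervised learning in particular, the sketch below (Python with NumPy, using synthetic data invented purely for this example) takes classifying points from two Gaussian clusters as the task T, accuracy on a held-out set as the performance measure P, and the number of labelled training examples as the experience E; accuracy tends to improve as E grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    """Two Gaussian clusters: class 0 centred at (0, 0), class 1 at (2, 2)."""
    x0 = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    """'Learning' here is just estimating one centroid per class from experience E."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each point to the nearest class centroid (the task T)."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

X_test, y_test = make_data(1000)        # fixed held-out set for measuring P
for n_train in (2, 10, 100, 1000):      # growing experience E
    X_tr, y_tr = make_data(n_train)
    acc = (predict(fit_centroids(X_tr, y_tr), X_test) == y_test).mean()
    print(f"E = {2 * n_train:5d} examples -> accuracy P = {acc:.3f}")
```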

Today, AI research is one of the most exciting fields in the tech world. The ongoing AI developments are divided into four categories: reactive machines, limited memory, theory of mind, and self-aware AI.

A Reactive Machine is the most basic type of AI, capable of responding to external stimuli in real time but lacking the ability to store information for future use. A Limited Memory AI is one that can store knowledge and use it to learn and train for future tasks. It uses recently garnered data to make immediate decisions, as in self-driving cars.

Theory of Mind refers to the concept of an AI capable of sensing and responding to human emotions, in addition to performing the tasks of limited memory machines. Lastly, a Self-aware AI is the ultimate stage of AI, where machines would have human-level intelligence, a sense of self, and the ability to recognize the emotions of others.

To date, as far as I can tell, humans have managed to build Reactive Machines and Limited Memory AI-based models; however, they are “millions of years” away from building self-aware AI models. I do not have any doubt that AI models will play very pronounced roles in automating workplaces and will be able to replace many white-collar workers in the foreseeable future. It is already possible to build models and computers capable of performing a quintillion operations per second in a structured way; in this department, now that we are in the era of exascale computing (exaFLOPS, i.e., 10^18 floating-point operations per second), AI and computers are at least a quintillion times faster than humans.

Additionally, AI can be very useful in areas as varied as helping doctors perform diagnoses easily and accurately, data analytics, data science, navigation systems, weather forecasting, military applications, smart mechanical surveillance systems, natural language processing, gaming, augmented reality, virtual reality, unmanned aerial and ground vehicles (and robotics in general), vision systems, and smart irrigation systems.

However, as I see it, it is next to impossible to build AI models or machines capable of dealing with highly nuanced things like language and common sense. As stated earlier, AI can definitely beat humans in some departments, but it certainly lacks common sense and struggles with nuance, subtle differences, and unforeseen circumstances, because the world we live in presents endless unforeseeable challenges. In short, humans are light-years away from creating a self-aware AI, though it may be possible to create a human-like AI with metamemory skills.

However, there are countless good reasons to believe that AI is, at the very least, a double-edged sword unless it is used under the control of responsible individuals or governments. It has the power to positively shape humanity’s future across nearly all industries, but, equally, it could have catastrophic consequences in infinitely many ways if misused. To corroborate this line of argument, let us delve into its surveillance and military applications, and the question of fairness. Beginning with the role of AI in video surveillance systems (VSS): currently, there are an estimated one billion or more closed-circuit television (CCTV) cameras in use worldwide.

Wherever you go, Big Brother is watching you! The countless CCTV cameras mounted on walls, ceilings, and corners, or perched on street poles, border checkpoints, and lamp posts, enable governments to collect a great deal of information about individuals, without their knowledge or consent and with flagrant disregard for their privacy rights.

Universally, privacy is a fundamental human right upon which other rights are built. It is defined as the state of being free from any disturbance or observation without one’s consent and knowledge. It is designed to help individuals establish boundaries that determine who can have access to their personal information. Furthermore, it is defined as the protection of personal information that says anything about who we are, what we do, what we think, and what we believe in. It is often argued that only individuals who have something to hide will care about their privacy; the ‘nothing to hide, nothing to fear’ quote is often invoked in this context.

On the other hand, many argue that it is never just about having ‘something to hide’, but, rather, about an individual’s privacy and the kind of society we want to live in. The quote, “I don’t have anything to hide, but I don’t have anything I feel like showing to you, either” is used in an effort to corroborate this position.

To the utter dismay of political opponents and ordinary civilians, AI has added a lot of capabilities to CCTV cameras, enabling governments to collect more specific information about targeted individuals and to track them around the clock. Today, extremely powerful CCTV cameras are out there, equipped with powerful facial recognition AI models, tracking systems, and the ability to link individuals to databases.

The AI models help the cameras to effectively perform video frame inspection, identification, recognition, classification, observation, detection, and monitoring, either on the spot, based on the edge-computing paradigm, or offline, on a remote cloud-computing scheme. Despotic governments can make use of AI-powered cameras to flagrantly suppress and attack political opponents, dissidents, or critical citizens. They can always watch you from the surveillance operation centre (SOC)!

As we know, AI can react significantly faster than systems that rely on human input/intervention, and can drastically augment the process of military target acquisition. AI models can help in the detection and identification of the location of a target in sufficient detail to allow the effective use of lethal and/or non-lethal means/weapons.

Turning to the role AI can play militarily: it is currently instrumental in enhancing the surveillance and target acquisition capabilities of manned and unmanned ground and aerial vehicles (MGVs, UGVs, MAVs, and UAVs). If such technologies fall into the hands of irresponsible governments or individuals, they can quickly unleash disasters.

For instance, drones can appear out of nowhere and cause catastrophes. The world has witnessed how deadly drone warfare can get in the Nagorno-Karabakh War, where large amounts of military equipment and many personnel were very quickly obliterated, and in the Tigray war in Ethiopia, where hundreds of thousands of people were killed in the cruellest manner.

Moreover, the world is now in a hypersonic missile race, and the role of AI in it is extremely worrying. At present, many countries possess thousands of intercontinental ballistic missiles (ICBMs) – which can carry both conventional and nuclear warheads – capable of traveling at speeds of Mach 23 (about 28,200 kilometers per hour) or higher; however, they are vulnerable in that they travel along predictable trajectories. That is to say, they can be detected and tracked by missile defense systems and might be intercepted in their boost, midcourse, or terminal phase. As a result, countries are in a race to own a new class of missiles, called hypersonic glide vehicles, which can travel along unpredictable trajectories to evade air-defense systems and successfully reach their targets.

Here, AI can be used both for perpetrating an attack with these missiles and for countering such an attack. AI can give these missiles the ability to follow random or unpredictable paths, rendering existing missile defense systems useless. It can be instrumental in making hypersonic missiles smart enough to follow erratic paths and in enhancing their invisibility by forming plasma sheaths that absorb radar signals and protect them from radar detection.

On the other hand, AI can also be employed to accelerate the complete kill chain, from detection to destruction, of hypersonic missiles. It will probably be the only feasible technology that can give militaries the capability to better defend against hypersonic weapons, which travel at incredibly high speeds along erratic trajectories. It does not take a genius to envisage how devastating it would be if such consequential AI-powered technology came into the possession of irresponsible parties.

In relation to AI and fairness, there is a pervasive misconception: many people mistakenly believe that AI is inherently fair and square. This notion is absolutely wrong.

AI can only be as fair as you make it. It can potentially suffer from three types of bias: algorithmic AI bias, training data bias, and societal AI bias. AI algorithms might, for instance, be designed to systematically manipulate the weights of some parameters differently depending on people’s ethnicity, history, origin, belief, or location, which creates unfair outcomes. Put another way, the algorithm can only be as fair as its builder wants it to be.

Data bias is introduced when insufficiently diverse data is employed to train an AI model. For instance, when modeling a desirable outcome, abundant data may be available for one group or situation and far less for another; in that case, the group or situation with the larger share of the dataset might receive unfairly favorable outcomes compared to the underrepresented one.

Conversely, for a model that detects criminal behavior patterns, more data on a certain group of people might be employed in building the AI model, and so the model might unfairly target that group. Moreover, some societal or communal assumptions and norms have the potential to give AI model creators blind spots or skewed expectations in their thinking, which they might then build into their AI models.
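To make the training-data-bias point concrete, here is a small hypothetical sketch (Python with NumPy, entirely synthetic data): a single threshold classifier is trained on a pool in which group A is heavily over-represented, and the under-represented group B, whose scores are distributed differently, ends up with a markedly higher error rate.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_group(n, shift):
    """Synthetic group: negatives and positives separated around a group-specific shift."""
    scores = np.concatenate([rng.normal(shift - 1, 1, n), rng.normal(shift + 1, 1, n)])
    labels = np.concatenate([np.zeros(n), np.ones(n)])
    return scores, labels

def best_threshold(scores, labels):
    """'Train' a one-parameter model: pick the threshold with the highest accuracy."""
    candidates = np.linspace(scores.min(), scores.max(), 200)
    accs = [((scores > t).astype(int) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accs))]

# Biased training set: 1000 samples per class for group A, only 20 for group B.
sA, yA = make_group(1000, shift=0.0)   # over-represented group
sB, yB = make_group(20,   shift=1.5)   # under-represented group with a different distribution
t = best_threshold(np.concatenate([sA, sB]), np.concatenate([yA, yB]))

# Evaluate both groups on fresh, equally sized test sets.
for name, shift in (("A", 0.0), ("B", 1.5)):
    s, y = make_group(5000, shift)
    err = ((s > t).astype(int) != y).mean()
    print(f"group {name}: error rate {err:.3f}")
```

The model itself is not malicious; the imbalance in the training data alone is enough to produce the unfair outcome for group B.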

In parallel to the AI age – a sub-block of the Information Age – other fascinating technologies are on the rise. The Experience Age is dawning, thanks to the combination of technological advancements in mobile connectivity, the Internet of Things (IoT), artificial intelligence, chatbots, social media, and messaging apps. Now we are on the cusp of the next great technological period: the Augmented Age, dominated by virtual reality (VR) and augmented reality (AR).

The AR/VR community’s vision is for the technology to move from gaming to everyday life in three major steps. First, between 2015 and 2020, VR was mostly perceived as a gaming and entertainment tool. Second, from 2020 to 2025, VR and AR are expected to merge, and VR is expected to grow into a more promising professional platform.

Eventually, from 2025-2030, advanced VR sets are expected to be created, touted to be capable of reading our minds and helping us get our work done on the fly in any situation. It is projected that it would be hard to live without them. AR is not the same as VR in terms of the technology applied and devices employed. VR is designed to create an alternative reality that a user can experience without input from the real world, and the whole experience is digitally created and controlled.

In contrast, AR is designed to incorporate digitally created 3D objects and 2D imagery into real life, where the user remains present in the physical world. It simply adds digital content to the physical world without changing its overall aspects. AR smart glasses are already on the market. They incorporate AR technologies into a wearable device that allows users hands-free access to their laptops, mobiles, or the Internet. Users can easily keep track of current information on the spot, without interrupting their work, by using voice control, eye movements, or simple presses of buttons built into the frame of the smart glasses.

It will not be long before teleprompters are fully superseded by AR smart glasses; users will be able to make presentations right from their smart glasses and open apps through eye blinks and other controlled movements. It will also likely not be too long before we have augmented driving and safety features in cars that give us a 360-degree view of the vehicle (something that is already partly realized today).

In view of these developments, I personally am passionate about AI and AR. I strongly advocate responsible engagement in these areas and endeavors, to positively and fairly impact societies and the world.

To date, tremendous successes have been achieved in imparting data, information, and human intelligence to electronic machines that can at least narrowly mimic human behavior and perform tasks by learning and problem-solving. Even so, AI still glaringly lacks common sense and the ability to understand nuances. Besides, it is possible that humans might not actually be able to build self-aware AI systems or machines.

However, the fact that we are at a point in time where AI models are already capable of simulating natural intelligence to solve complex and highly convoluted problems is a remarkable achievement. Now, the sad reality is that, in many societies, some are not yet ready to embrace these most ingenious and fascinating technologies. From the standpoint of technological periodization, many people around the world are still between the Stone and Iron Ages.

Obviously, there has long been an astronomical gap between the economic haves and have-nots, and it is bound to get far worse due to the alarmingly growing technological divides. All the harbingers at this point in time suggest that countries with less data-savvy and tech-savvy populations will likely find it hard to survive in many ways, because AI will surely continue to change the way we do things, the way we communicate, the way we do business, …, and the way countries wage wars.

What former US President Bill Clinton once said regarding the technological divide is still absolutely true:

“It is dangerously destabilizing to have half of the world on the cutting edge of technology while the other half struggles on the bare edge of survival.”

Hence, instead of engaging in frivolous acts, gossiping, and frittering away one’s time on preaching myths or anachronistic ideas, it is more prudent to recognize that AI is already having an impact on societies, understand where AI flourishes, and devise a viable strategy to survive in its huge presence.

________________________________

About the author:

Alem Haddush Fitwi is a researcher with a PhD in Electrical and Computer Engineering. The ideas expressed here represent his personal views exclusively.