Artificial Intelligence (AI)

“Artificial Intelligence is the electricity of the 21st century.” Andrew Ng, Stanford University

“We want Google to be the third half of your brain.” Sergey Brin, Alphabet President

Introduction

The revolution is upon us. The rapid growth of artificial intelligence, blockchain, cryptocurrencies, and quantum computing will deliver lasting and irreversible change to global markets. These technologies not only affect existing products and services and give rise to new innovations, but also – and arguably more importantly – have the power to fundamentally alter the way our institutions function. Enterprises, militaries, and schools will have to navigate and adapt to the changing landscape to grow and remain competitive. The job market faces its own set of opportunities and challenges. We believe certain conditions exist today that will drive the accelerated development and use of these technologies. And in an environment poised for further growth, we expect to see enormous change.

To explore the potential impact of these major disrupting technologies, we have developed a series of articles on each, beginning with artificial intelligence (AI). AI is a broad term that includes machine learning, deep learning, ensemble techniques, simulations and optimization techniques, and natural-language processing. Though still in its infancy, AI is embedded in every other technology we will examine, making it both an undeniable opportunity and a source of unknown, and potentially concerning, consequences.

Fueling AI’s Exponential Growth

We believe there are several key conditions in the current technological market fueling the growth in AI and advanced machine learning, making them top strategic technology trends. While technical advancements have driven down costs and barriers to entry for many players, demand for AI-driven products and services has intensified competition among companies seeking to become leaders in the sector.

At its core, AI is driven by information. We have witnessed an explosion of available data from a growing number of sources, such as sensors and other edge-computing devices. This means that machine-learning technology can now access the essential data to support its algorithms, turning data into information. In addition, significant increases in computing power, advances in system architectures, greater in-memory storage, and more powerful and efficient chipsets in highly scalable cloud-based architectures have provided the capacity for further growth. These developments have removed many of the barriers to infrastructure implementation for large organizations, making machine-learning solutions vastly more powerful, affordable, and attractive.

These technologies are also attracting talent, promoting further investment. For example, machine-learning software development tools, which were once prohibitively expensive and complex, have become relatively inexpensive or free. As a result, hundreds of thousands of data scientists and students are working in data science-related disciplines.

The focus on AI development is reflected in significant expenditure increases. The Wall Street Journal reports that spending on AI is expected to surge from $8 billion in 2016 to $47 billion by 2020. Across the private sector, we see companies at the forefront of AI development capitalizing on strategic opportunities, including partnerships with service providers across all industries. Semiconductor vendors like NVIDIA, partnering with strategic software developers such as Google and Sony that command massive research and development budgets, are helping to ensure a future where AI is ubiquitous and essential.

For example, AI and deep neural networks (DNNs), algorithms whose architecture loosely resembles the neural networks of the human brain, present a significant opportunity for manufacturers of high-performance chips. Companies such as NVIDIA with its V100 and Intel with its Xeon processors are now supplying the engines that run the specialized workloads associated with AI. To illustrate applied DNNs and machine learning in consumer electronics, consider Sony’s robotic dog, AIBO. AIBO can acquire basic skills, react to verbal commands, and “learn” new skills while altering its behavior to elicit a pat from its owner.

Chart shows increasing revenue from artificial intelligence in world markets beginning in 2016 and rising to over $35 billion by 2025.

Source: Tractica

Gartner, a prominent technology consulting firm, predicts that by 2020, DNNs and other machine learning applications will represent a $10 billion market opportunity for semiconductor vendors. By 2021, Gartner believes that more than 50% of all new semiconductor microprocessor designs will include embedded neural network processing capabilities, up from 0% in 2016.

Rapid growth in the demand for advanced machine-learning technologies is fueled by the horsepower they add to both existing and future technology applications. AI software developers and semiconductor vendors have created chips capable of computing thousands of algorithms embedded in hundreds of layers throughout an AI program.

The increase in the power of microprocessors has thus far allowed the development of visionary technologies such as IBM’s Watson. Watson claims in a 2015 TV commercial featuring Bob Dylan that it can read 800 million pages a second. Some analysts predict AI will account for 50% of IBM’s revenue before the end of the decade. Like IBM, Google (Alphabet), Facebook, Amazon, Twitter, Yahoo, Intel, Dropbox, LinkedIn, and Pinterest are all making enormous investments in AI.

The public sector is also making sizeable investments in AI. “The Canadian Government announced that it would dedicate $125 million toward developing AI in Montreal, Toronto, and Edmonton—cities with top universities,” as reported by the Wall Street Journal in March 2017. This kind of investment has led to fierce competition for talent. Google (Alphabet) recently hired the director of Stanford University’s AI lab to lead a new AI unit. Researchers are warning that major technology companies are draining universities of scientists responsible for training the next generation of researchers. In fact, the share of new PhDs in computer science leaving to take industry jobs has risen from 38% to 57% in the last decade.

Notwithstanding all this potential, AI is not universally embraced. High-profile futurists, including Tesla founder Elon Musk and the late physicist Stephen Hawking, believe humans could be supplanted by AI. Given enough time, many routine tasks, specialties, and human skills will become redundant and replaceable. Skills such as navigation, legal document preparation, and medical diagnosis appear to be some of the low-hanging fruit. Furthermore, questions have been raised about whether the power of sufficiently advanced AI, known as general AI, could actually surpass our powers of perception and imagination. And concerns about the ability of a powerful algorithm to override parameters set by its creators are reminiscent of the age-old confrontation of man vs. machine. While such scenarios were once fodder for science fiction, both sides admit the AI genie won’t be returning to the bottle.

While Musk and Hawking predict AI will threaten our very existence, many others are more sanguine, assuring us that AI will be just another tool to improve our productivity—not unlike semiconductors and spreadsheet software. These technological advances and opportunities—and concerns—will have reverberations across industries and sectors. Along the way, to advance productivity and meet the demand for new tools, AI will almost certainly replace countless jobs but also create the need for new ones, radically changing the professional landscape. To put it in perspective, many in the next generation will work alongside AI in jobs yet to be created.

The Architecture of AI

“A model is a decision framework in which the logic is derived by an algorithm from data, rather than explicitly programmed by a developer or implicitly conveyed via a person’s intuition. The output is a prediction on which a decision can be made. Once created a model can learn from its successes and failures with speed and sophistication that humans usually cannot match.” Steven A. Cohen, CEO of Point72 LP

AI has been theoretically possible for almost a century. However, given the massive number of computations required, it wasn’t practical until cloud computing, huge data sets, and graphics processing unit (GPU) accelerators entered the picture. Also called machine intelligence, AI in various ways imitates biological neural networks. As Scientific American explains, “deep learning refers to the simulation of networks of neurons that gradually ‘learn’ to recognize images, understand speech, drive a car, or even make decisions.” It automates feature extraction from large amounts of data, a process that requires a tremendous amount of computing power. Researchers apply it in domains such as internet services, medicine, media, security, defense, and autonomous machines.

The architecture of deep learning begins with a series of conditional probability algorithms arranged into layers that are combined into stacks. Each layer carries out its specific task until a goal is reached—one with the highest probability of accuracy. For example, the first layer might be an algorithm that recognizes general geometric shapes from an array of pixels. It selects a shape and sends it to the next layer, where another algorithm matches it against human facial features, since certain geometric shapes resemble facial features more closely than others. That algorithm, or node, reports a number representing its confidence that the geometric shape is a human facial feature. A close match means there is a high probability that the shape is part of a face, raising the probability that the picture shows a human face and not, say, a dog.
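To make the stacked-layer idea concrete, here is a minimal, purely illustrative sketch in Python/NumPy. The layer sizes, the random weights, and the pixels-to-shapes-to-features-to-face breakdown are all hypothetical; a real deep network would have far more units and would learn its weights from data rather than start from random values.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: combine the features reported by the layer below and
    squash each output to a 0-1 confidence score (sigmoid activation)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Hypothetical stack: pixels -> basic shapes -> facial features -> face score.
pixels = rng.random(64)                            # a tiny 8x8 image, flattened
w1, b1 = rng.normal(size=(16, 64)), np.zeros(16)   # shape detectors
w2, b2 = rng.normal(size=(8, 16)), np.zeros(8)     # facial-feature detectors
w3, b3 = rng.normal(size=(1, 8)), np.zeros(1)      # face vs. not-face

shapes   = layer(pixels, w1, b1)    # confidence that each basic shape is present
features = layer(shapes, w2, b2)    # confidence that each facial feature is present
p_face   = layer(features, w3, b3)  # overall probability the image shows a face

print(f"P(face) = {p_face[0]:.2f}")  # meaningless until the weights are trained
```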

From an array of features in the database, the feature with the highest probability is selected and passed to the next layer, which looks to match the feature to a human face. The following layer matches that face with a type of person, male or female, Asian or Caucasian, young or old, and then on to a specific person in the database. This iterative process repeats many times. Engineers can use this same strategy to recognize voices or even pictures of cancerous tissue.

Some AI structures have hundreds of layers. Just as repeated firing reinforces pathways between human neurons, each success reinforces the path the system will use, so that over many trials a sort of default path emerges while previously unsuccessful pathways are avoided—neurons that fire together, wire together. Massive amounts of structured data, pictures, or voices are pushed through the system to parse out common characteristics, and the program is rerun again and again until the default pathways are those that succeed most often and therefore have the highest probability of being correct in the future. As the Wall Street Journal explained on August 2, 2017, “machine learning doesn’t yield exact answers, but it reduces uncertainty around risks. For example, AI makes mammograms more accurate, so doctors can better judge when to conduct invasive biopsies.”
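In practice, this reinforcement is implemented as iterative weight updates. Below is another minimal sketch, again in Python/NumPy with made-up data, of a single-layer model whose connection strengths are nudged after every pass so that paths producing correct answers are strengthened and paths producing errors are weakened. Gradient descent on a simple logistic model stands in here for the full training of a deep network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled data: 200 feature vectors; label 1 = "face", 0 = "not a face".
X = rng.random((200, 8))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # an arbitrary hidden rule to be discovered

w, b, lr = np.zeros(8), 0.0, 0.5              # connection strengths start neutral

for epoch in range(500):                      # rerun the same data again and again
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # current confidence for every example
    error = p - y                             # positive where the model over-predicts
    # Update: weights on paths that produced wrong answers are weakened,
    # weights on paths that produced right answers are reinforced.
    w -= lr * (X.T @ error) / len(y)
    b -= lr * error.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
print(f"accuracy after repeated passes: {(pred == y).mean():.2f}")
```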

Machine learning, a subset of AI, enables machines to train themselves. This involves an additional layer that seeks to rewrite its own algorithms to enhance its ability to learn, which is especially helpful when the data is unstructured. Google Now is one project applying this technology, and gradual improvements in speech recognition by Siri and Google Assistant are the result of machine-learning architecture. The algorithms resident in the nodes are sometimes referred to as neurons.

Source: Minz, Quora

Exploring the Growing Applications of AI

Digital Assistants, Entertainment, and Education 

Smartphones, computer tablets, and automobiles are evolving. “We can control their actions with simple gestures, body movements, even facial recognition; advanced lasers—specifically, diode lasers with 3D-sensing capabilities—are changing how we interact with technology every day. Virtual reality (VR), augmented reality (AR), and autonomous-driving vehicles are just a few examples of new applications on the horizon.” Lumentum

You would be hard pressed to find an innovative product or service new to the market that does not incorporate some degree of AI. Changing consumer behavior reflects just how much AI has permeated the market. With regard to the proliferation of personal assistants and related devices in recent years, Gartner expects voice-enabled AI technology in the home to be a $281 billion business by 2020 as consumers add more and more connected devices. Take, for example, Echo, Amazon’s voice-controlled speaker answering to Alexa. Echo is an AI-enabled digital assistant. AI is what allows Echo to recognize human speech, obey requests, and respond with a natural-sounding voice. Echo can help control the temperature of your home, answer the phone, make a flight reservation, and of course order items from Amazon. Amazon was a first mover in this area and has cemented its place in the home alongside other appliances.

Following suit, Google and Apple have also established their respective digital assistants. Google is now rethinking all of its products from an AI perspective. Apple’s Siri is embedded in one of the world’s best-selling cell phones, but Apple was slow to establish Siri as a home appliance. Among other features, verbal commands to Siri enable users to link security or entertainment devices to wearable accessories.

Music streaming leader Spotify is acquiring companies in the AI space to improve its music search and better compete against Apple Music, Amazon Music, and streaming pioneer Pandora. Sonos, in partnership with Amazon’s Echo, has developed wireless speakers for every room that respond to voice commands. Sonos recently went public after its products became wildly popular with consumers.

In the education and research space, some scientific work will be outsourced to AI, speeding research and development and therefore saving time and money. For example, the Department of Energy’s SLAC National Accelerator Lab at Stanford is using AI to discover catalysts that speed up the reactions that create important chemicals, such as turning nitrogen into ammonia. Because of the enormous number of possible reaction pathways, researchers have previously relied on intuition to narrow the choices – a process that could take years. With AI, machine learning replaces human intuition. Stanford is also using AI in the lab to comb through the millions of possible atomic arrangements in glass to find a structure that could make glass stronger than steel. AI programs are even learning to teach and grade exams by comparing completed essay exams to those written by teachers, using the feedback to create improved teaching methods.

Healthcare and Genetics

Pharmaceutical companies dominate drug development in the West, but AI is changing that landscape, too, allowing smaller biotech firms to enter the game. By lowering the costs of research, AI is streamlining drug testing with models of human tissue in 3D, allowing tests of new drugs on virtual patients. With digitized data supplied by biometric sensors, digital databases are growing, supplying AI-based programs with the data required to accelerate the development of an outcomes-based healthcare system.

In addition to drug development and testing, AI is being pioneered in the realm of medical diagnostics, with IBM’s Watson publicized as a leading diagnostician. With access to the world’s digitized medical records, Watson can match symptoms with diseases in mere seconds with astounding accuracy. With nearly infinite memory, the feedback it receives from each trial makes it smarter. Watson is not autonomous—yet. It is designed to be a tool for medical professionals; a symbiotic relationship.

Let’s look at another example. Melanoma, a skin cancer, strikes over 150,000 Americans a year, killing one person every hour. The tragedy is that in its early stage, melanoma is usually quite visible. IBM and Quest Diagnostics have teamed up to identify melanoma from pictures of lesions, cross-checking similarities with examples stored in an enormous database of known melanoma lesions. A special lens adaptable to smartphones is making it easy for patients to send pictures of suspicious lesions to health care providers.

The Wall Street Journal reports that Massachusetts General Hospital recently established a center in Boston that plans to use NVIDIA’s powerful chips to run an AI system to spot anomalies on CT scans or other medical images. The project draws on a database of 10 billion existing images designed to “train” systems to help doctors detect cancer, Alzheimer’s, and other diseases earlier and more accurately.

Advancements and partnerships driven by AI are even opening the door for non-traditional players to enter the medical space. Apple foresees an enormous opportunity in healthcare. A recent upgrade to the Apple Watch allows it to monitor the pattern of heartbeats and issue a warning if something is out of sync. And by analyzing a large database of web search queries, Microsoft announced it had identified internet users suffering from pancreatic cancer, even before they had been diagnosed.

Beyond diagnostics, AI and big data (an enormous set of data historically too large to analyze using traditional data-processing tools) are streamlining the development of tools to correct genetic defects. In conjunction with genetic editing technologies like Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), scientists have greatly improved the prospects for curing genetic diseases in our lifetime. The number of emerging biotech companies using CRISPR technology is only limited by the availability of talent.

The manipulation of human genes clearly raises ethical issues, such as whether experimental genetic drugs should be tried with terminally ill patients. Cures for genetic defects in humans will lag AI-aided plant and animal research. In fact, the genetic techniques and applications in agriculture have seen the most progress to date. Still, we can look forward to the combination of AI and big data resulting in lower energy and food costs, as well as the faster discovery of cheaper and more effective drugs.

The Future of Traditional and Cyber Warfare—A Warning

For many, the thought of autonomous weapons, long predicted by science fiction writers, evokes images of The Terminator and Skynet. While we are a long way from androids bent on exterminating the human race, semi-autonomous weapon systems have already been deployed, and fully autonomous weapons are not far behind. Major powers will need to negotiate limits on autonomy or risk that a presumed attack spirals into war before humans can intercede. Weapons will be developed to rival those of perceived adversaries, and as AI improves, so will the lethality of these weapons.

According to Paul Scharre, Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security, at least 30 nations have deployed “supervised” autonomous weapons designed to protect ships, vehicles, and bases. Once placed in automatic mode, they can target and fire without waiting for a human command. The example pictured below is the Samsung SGR-A1 robotic sentry that South Korea has stationed along the demilitarized zone on its border with North Korea. Similarly, in a move toward unmanned combat vehicles, Russia recently publicized its plan to create a fully autonomous version of its main battle tank, the T-14 Armata, shown below.

Samsung SGR-A1 Robot Sentry. Source: Ubergizmo

Russian T-14 Armata. Source: Quora

Another significant focus of AI-driven warfare advancements is drones. One of the world’s largest defense contractors, BAE Systems, is developing the next generation of unmanned combat drones, code-named Taranis. Operated by humans, Taranis is capable of loitering in a contested space, identifying a target, and destroying it. Other recent examples include Northrop Grumman’s X-47B and Israel Aerospace Industries’ fully autonomous Harpy drone, each of which can circle over contested areas to search for targets and strike without human supervision.

Taranis. Source: BAE Systems Air

Northrop Grumman X-47B Drone. Source: Axe, National Interest

The opening ceremonies for the PyeongChang 2018 Winter Olympics provided a peek at the next stage in warfare: cooperative autonomy. The ceremony featured a collection of drones acting in harmony as if they were a single organism. Each drone communicated independently with the others about where, when, and how to fly a preprogrammed pattern while avoiding collisions—much like a flock of birds. In a military context, swarming drones carrying armaments could be deployed in huge numbers with few human controllers. AI-enabled swarms could autonomously evade incoming threats by simply opening a hole in the formation, whereas a single heavier aircraft might find evasion impossible. The drones could also overwhelm defenders, aircraft, ships, and ground personnel by sheer weight of numbers. Beyond the Olympics display, China and the U.S. Air Force have openly demonstrated this swarming capability, while the U.S. Navy is experimenting with swarming robotic boats.

Star Trek Beyond – Swarm Attack Scene. Source: Paramount
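As a purely illustrative aside, the flock-like coordination described above is often modeled with simple local rules (separation, alignment, and cohesion) of the kind popularized by “boids” flocking simulations. The sketch below, in Python/NumPy with made-up positions and tuning constants and no relation to any actual military system, shows how collision-avoiding group behavior can emerge from each agent reacting only to its neighbors.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
pos = rng.random((N, 2)) * 100           # agent positions in a 100x100 area
vel = rng.normal(size=(N, 2))            # initial velocities

def step(pos, vel, sep=5.0, max_speed=3.0):
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        others = dist > 0
        # Cohesion: steer gently toward the center of the group.
        cohesion = (pos[others].mean(axis=0) - pos[i]) * 0.01
        # Alignment: match the average heading of the group.
        alignment = (vel[others].mean(axis=0) - vel[i]) * 0.05
        # Separation: move away from neighbors that are too close (collision avoidance).
        close = others & (dist < sep)
        separation = -offsets[close].sum(axis=0) * 0.1 if close.any() else 0.0
        new_vel[i] += cohesion + alignment + separation
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                     # cap the speed of each agent
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

for _ in range(100):                              # simulate 100 time steps
    pos, vel = step(pos, vel)

print("spread of the swarm after 100 steps:", np.round(pos.std(axis=0), 1))
```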

Seasoned combat pilots routinely perform the cycle of observation, orientation, decision, and action. AI-piloted unmanned aerial vehicles will do the same, but while drawing on the experiences of hundreds of human pilots in thousands of scenarios. In combat, microseconds can spell the difference between life and death, and the slower reaction time of a human pilot could prove a decisive disadvantage.

AI is not just driving innovation in traditional warfare technologies—cyber warfare is a compelling topic in and of itself. Today, the world is in the equivalent of a cold war in cyberspace, with impacts permeating both government and civilian boundaries. While there are wide-ranging examples of cyber warfare, the most famous cyber weapon to date is Stuxnet, a malicious computer worm believed to be jointly built by the U.S. and Israel that attacked Iran’s nuclear facilities and destroyed centrifuges used to produce enriched uranium, thereby delaying the Iranian atomic bomb for years.

Malware, allegedly from Russian state-sponsored hackers, is now resident in the software that manages much of America’s utilities, water, and financial networks. Reports indicate Russian hackers could be using Ukraine, a former Soviet republic on Russia’s western border, like a petri dish to test their cyber weapons by hacking into banks, successfully shutting down power grids, and altering election results.

China continues its massive cyber espionage and malware efforts against its adversaries. In 2010, Chinese state-sponsored hackers broke into the U.S. Government Accountability Office via unguarded computers in the Department of the Interior and downloaded more than 50,000 classified employee files. Bank accounts and social security numbers were stolen from the U.S. government again in 2013, this time possibly by hackers from Iran. North Korea, which keeps its own population cut off from the global internet, has deployed hackers around the world.

North Korea has been credited with, but denies responsibility for, the famous Sony Pictures hack, which crashed servers and disseminated confidential information in retaliation for the production and release of The Interview, a comedy about an assassination attempt on Kim Jong Un. Sony was forced to operate with pen and paper for weeks.

More ominous are the recent government-focused security breaches. Infamous hacks include the Russian intrusion into the U.S. Navy’s computers in 2011 and the 2015 hacking of State Department and White House computers by the Russian hacker group known as Cozy Bear. It is now widely accepted that Russia collected and distributed damaging information in advance of the 2016 U.S. presidential election—and Russia is expected to be a cyber factor again in the 2018 midterm elections. It is logical to assume that AI cyber weapons will only become more agile and stealthy in the coming years.

How Disruptive Will AI Be?

“This technology will be applied in pretty much every industry that has any kind of data—anything from genes to images to language.” Richard Socher, founder of MetaMind, an AI startup recently acquired by Salesforce, a cloud-computing giant.

The expanding and evolving presence of AI ensures that many, if not all, institutions will be required to adapt or risk becoming obsolete. In fact, many tech companies are already using AI or have plans to use AI in the near term.

So what are the implications? The future of jobs and the structures of our institutions hang in the balance.

Spiceworks, Inc. Survey Results, October 2016. Source: The Wall Street Journal

Dr. Louis-Philippe Morency at Carnegie Mellon University is teaching machines the subtle art of analyzing human facial expressions, with the ultimate goal of using machines to detect and diagnose psychiatric diseases. In the future, will fewer psychiatrists be needed? If AI is that clever at recognizing a specific human face, it is likely to be even better at inspecting standardized manufactured items such as jet engine blades. Will fewer inspectors be needed? Financial services and payment processing companies are using DNN techniques to detect fraudulent transactions and insider trading and to analyze market sentiment. Will fewer analysts be needed?

Worries that AI may ultimately replace many jobs are not new and have instead taken on added urgency. A widely cited study by Carl Benedikt Frey and Michael Osborne of the University of Oxford, published in 2013, found that 47% of jobs in America were at high risk of being “substituted by computer capital” soon. More recently, Bank of America Merrill Lynch predicted that, by 2025, the “annual creative disruption impact” from AI could amount to between $14 trillion and $33 trillion. Furthermore, Accenture predicts that digital disruption, powered by AI, threatens to replace half of the companies currently in the S&P 500 within the next ten years as companies use advanced AI to transform their core operations and business models. Established enterprises that refuse to transform could find themselves perfectly suited for a world that no longer exists.

Source: McKinsey

While some companies are still navigating the impact of AI, others are starting ahead of the curve. Emergent AI companies are finding it easier to enter the space: as these start-ups develop, access to AI through web services significantly lowers their initial costs and barriers to entry. To highlight a few notable examples, giant tech firms such as NVIDIA and other major players are providing online courses, while Google, Facebook, IBM, Amazon, and Microsoft are trying to establish ecosystems around AI services provided in the cloud. Google Cloud Platform offers machine-learning infrastructure as a service, IBM enables high-performance access to latest-generation NVIDIA GPUs, and Amazon Web Services offers wide-ranging support for major machine-learning frameworks.

Time will tell how disruptive AI will ultimately be, as the successes and failures will be determined by the ability to remain nimble and adapt in a changing landscape.

Source: Gartner (March 2018)

Looking Ahead 

The use of AI represents one of the most significant advancements in computing since the advent of the semiconductor chip. Bloomberg has even referred to it as the fourth industrial revolution. Though still in its early stages, AI is evolving rapidly—and every major government and private enterprise in the world is contributing to its development. In the end, every person will be touched by some type of intelligent machine. Opportunities and threats are only limited by our imagination. Although quantum computing may have an even greater impact, its practical widespread application is at least a decade away. We will cover that topic at a later date.

The next installment of this series will switch gears and address the impact of blockchain technologies and cryptocurrencies.

Bibliography

Bengio, Yoshua. “Machines Who Learn.” Scientific American. June 2016. www.scientificamerican.com.

CNET. “We played with Aibo: Sony’s $2,899 robot dog.” Online video clip. YouTube. 23 August 2018.

Frey, Carl Benedikt, and Osborne, Michael A. “The Future of Employment: How Susceptible Are Jobs to Computerisation?” Oxford: University of Oxford. 17 September 2013.

George-Cosh, David, and Hernandez, Daniela. “Canada Looks to Develop a New Resource: Artificial Intelligence.” The Wall Street Journal. 31 March 2017. www.wsj.com.

Ip, Greg. “We Survived Spreadsheets, and We’ll Survive AI.” The Wall Street Journal. 2 August 2017. www.wsj.com.

Parmar, Neil. “AI Holds Promise of Improving Doctors’ Diagnoses.” The Wall Street Journal. 12 September 2017. www.wsj.com.

Rosenbush, Steve. “The Morning Download: Spending on Artificial Intelligence Set to Explode.” The Wall Street Journal. 12 January 2017. www.blogs.wsj.com.

Sayler, Kelley, and Scharre, Paul. “Autonomous Weapons & Human Control.” Center for a New American Security. 7 April 2016. www.cnas.org.

“The return of the machinery question.” The Economist. 25 June 2016. www.economist.com.

Swarm Attack Scene. Paramount Pictures, 14 Dec. 2015, www.treknews.net/2015/12/14/star-trek-beyond-trailer-breakdown/.

Taranis in Flight. BAE Systems, www.baesystems.com/en/product/taranis.

“Artificial Intelligence.” What Will Happen to Websites When Personal AI Assistants Become Pervasive?, Quora, 22 Oct. 2016, www.quora.com/What-will-happen-to-websites-when-personal-AI-assistants-become-pervasive.

“Harpy 1 UAV V05 3D Model.” Harpy 1 UAV V05 3D Model, Cgtrader.com, www.cgtrader.com/3d-models/aircraft/military/harpy-1-uav-v05.

“Samsung SGR-A1 Robot Sentry.” Samsung SGR-A1 Robot Sentry Is One Cold Machine, Ubergizmo, 14 Sept. 2014, www.ubergizmo.com/2014/09/samsung-sgr-a1-robot-sentry-is-one-cold-machine/.

Axe, David. “X-47B Jet.” National Interest, 21 Aug. 2017, nationalinterest.org/blog/the-buzz/weve-caught-glimpse-northrops-carrier-launched-tanker-drone-21983.