Like the Industrial Revolution before it, the rise of artificial intelligence (AI) is the great story of our age. Entire industries and job categories will be created, elevated, reshaped and even eliminated, while entire classes of human interaction will be redefined. In addition to its societal impact, AI is forecast to add $15.7 trillion to the global GDP by 2030.
But although the march toward AI is happening right now, early technologies can be fragile. The enthusiasm from investors and early adopters doesn’t guarantee a successful outcome. There’s a lot of work ahead for artificial intelligence to reach commercial scale and become foundational to the economy and society.
AI’s ability to learn by example and fit into active learning environments will require a level of governance and ethical oversight that is unprecedented for a technology that has incubated for so long. As we entrust it with a wider range of tasks that previously relied exclusively on human judgment, AI will require the highest standard of care.
These changes will come amid a great deal of public scrutiny, chaotic realignment of business models and puzzled governments trying to decide when and how to get involved. Artificial intelligence will contribute to an end goal everyone wants – a better future – but only if we learn from the past and guide it to success. Despite AI being intelligent, we, not AI, are the ones choosing its time and place of participation – and that’s a responsibility we must take seriously.
The Human Condition and AI
With traditional technologies, the possible impacts and abuses are known and calculated before release (ideally), and then they remain static. For example, new drugs are launched only after careful testing, and even if they end up exhibiting unintended at-scale side effects, they don’t redesign themselves after launch. But AI does; it is continuously learning and can be deliberately pushed off course (as with Microsoft’s Tay chatbot) or drift from its intended purpose as new training data comes into play, or as people begin to react to it in ways its pre-AI training data did not anticipate (a principle known as reflexivity).
AI thus requires nurturing, similar to what one would offer a child looking for guidance on morals and conduct, as well as training on skills and logic. This nurturing, in the form of governance from academia, companies, industries and governments, we believe, is a positive and necessary force. It will encourage innovation by offering guidelines, suggestions and standards.
History as a Guide
As disruptive as AI will be, it’s not the first time a new technology started small and then became so commonplace that we paradoxically can’t live without it yet scarcely notice its presence. If we want AI to boost economic growth and quality of life, we’ll need historical examples to guide its expanding impact.
We recently examined AI’s evolution by comparing its progress with the rise of computing power in the mid-1980s and the advent of the World Wide Web in the mid-1990s. The ‘90s ushered in the web as the glue holding pages and browsers together, creating a remarkable example of Bob Metcalfe’s network effect, whereby the more people who use something, the more useful it becomes.
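Metcalfe’s network effect is often formalized (loosely) as a network’s value growing with the number of possible pairwise connections among its users, which scales with the square of the user count. A minimal sketch of that arithmetic — the function name here is illustrative, not from the original text:

```python
def network_value(n: int) -> int:
    """Number of distinct user-to-user links in a network of n users:
    n * (n - 1) / 2, which grows roughly as n squared."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the number of connections.
for users in (10, 20, 40):
    print(users, network_value(users))
```

The quadratic growth is why adoption compounds: each new user makes the network more valuable for everyone already on it.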
By analyzing the progression of these game-changing technologies, we see a common set of stages that are likely to impact the evolution of AI:
- Standardization: Although only those with insider knowledge may know or care about the standards themselves, standardization creates an environment in which core sharing and the network effect (among technologists) can take shape. This is already happening in the machine intelligence space; take AI-as-a-service, which over time has made it easier to provision and operate AI services.
- Usability: Key AI components such as speech, vision, handwriting and image recognition are already available (pre-built) for rapid deployment, which is appealing to an early adopter audience.
- Consumerization: Typically following usability, consumerization brings with it an explosion of investment, return on investment and a virtuous cycle of attention that allows proliferation. This is already emerging, as AI makes itself known in conversational contexts (e.g., Siri, Alexa, Google Home) and in the improved quality of spam filters, recommendation engines, etc. Rapidly arriving are self-driving cars, robotic surgery and other applications that will soon scale to reach large populations.
- Foundationalization: The final proof of victory is when a given technology is considered so ubiquitous that it becomes the foundation of entire classes of economic and social activity. This is already the philosophy for many leading born-digital companies that believe AI is integral to their solutions and business processes.
Realizing the Full Potential of AI
For artificial intelligence to achieve its full potential, we need to guide its progression – and the public and legislative debate around it. For example:
- Business leaders can speak to AI’s role in providing better experiences for both employees and consumers, while also emphasizing the need to detect and mitigate bias and bring ethics into the conversation as a routine business topic.
- Tech leaders can speak to the technology’s ability to meaningfully address our growing mountains of digital data in a way that’s not possible with traditional technologies.
- Social scientists can look to artificial intelligence to provide companionship and context in difficult moments, or to uncover the deeper lessons hidden in billions of tweets and posts.
- Specialists can use their skills to “train AI” and thus scale their expertise across the globe. For example, talented radiologists could train AI to augment diagnosis in remote villages, and skilled teachers could train AI to lead students through proven steps to success.
- Politicians and pundits can recognize the technology’s potential to improve productivity and use these gains to elevate the human quality of life, when properly applied.
We need to learn from history – and repeat the best lessons while avoiding failures. The introduction of computing power and the web provides inspiring examples of how technologies that were once “nice to have” turned into must-haves that we can’t imagine living without. AI will follow – as the great story of our age – and we will play our part in bringing it to maturity.
For more in-depth insight on this topic, read our latest white paper “Better Future through AI: Avoiding Pitfalls and Guiding AI.”