A lot of hype surrounds AI. News reports warn about everything from mass automation displacing workers to sentient robots taking over. A more moderate view holds that AI can be a powerful force for good. But to understand AI's risks and opportunities, we must first understand what we mean by AI.
While any non-biological autonomous system capable of learning falls into the AI bucket, applications of AI already do many different jobs. From mundane task automation to anticipatory systems capable of mass personalization, AI is already at work transforming data into business outcomes, and industry leaders are just starting to explore what's possible. Yet AI is frequently misrepresented and misunderstood, and basic questions are neglected: What is AI doing? How does it work? When is it being used?
Shifting to AI Reality
As we move toward the creation of intelligent systems, it's critical to be forthright and transparent when employing AI. We must be thoughtful when determining the extent to which human reasoning and emotion are removed from the equation. The design challenges alone are demanding enough, yet the complexity increases when we add the intricacies of the algorithms and the tendency of marketing messages to oversimplify the details in order to sell AI-branded technologies and services.
This confluence of design challenges, complexity and marketing gumption has fueled an AI hype pendulum, both in the marketplace and in the public eye. Extremists at either end of the pendulum, idealists and pessimists alike, prey on those who are less informed, those experiencing FOMO (fear of missing out), and those who don't know the difference between an algorithm and an allegory.
We believe organizations must break free from the hype and merge insights gleaned from human behavior with meaning distilled from big data to make AI responsible and effective.
Thought Leaders’ Point of View
We tested these assumptions at the recently concluded AI Summit in San Francisco, where we asked thought leaders, executives and practitioners from fields including clinical psychology, neurobiology, operations research, robotics, software engineering, policy analysis and machine learning to riff on topics such as corporate responsibility, human intuition, responding to surprise AI outcomes, "computational propaganda," "AI food poisoning," data IQ, conversational AI and robotic process automation.
We video-recorded these conversations and titled the series "Staying Human" as a nod to the fact that when we create powerful new technologies, it is incumbent on us to protect and honor the traits that make us human.
Key themes that emerged from our conversations with executives and thought leaders include the following:
- First, corporate responsibility and leadership aimed at the ethical and transparent use of AI are on the rise. This theme resonated with leaders at large companies building enterprise systems (including Microsoft and Google), as well as smaller businesses building more bespoke solutions (such as Sentient and Primer AI). Ron Bodkin, Technical Director of AI at Google, stressed the importance of AI "doing right by humans" as an extension of Google's guiding purpose across the entire organization, while Carv Moore, CEO of Compellon, Inc., introduced a new partnership with Cognizant, discussing the "clear box" capability, which translates data and transparent modeling into actionable client solutions.
- As decision-making beings, humans need to thoroughly consider the extent to which elements such as implicit bias, intuition and trust factor into the internal calculus for making decisions. Jerry Smith, Vice-President of Data Sciences and Artificial Intelligence at Cognizant, acknowledged the importance of data IQ and EQ in AI development across multiple industries, while author and therapist Molly Carroll shared the perspective that the best decision-making AI should be designed from a combination of our gut instincts and rational minds.
- Finally, following a five-year flurry of wild growth in attention and investment targeted toward AI, many companies, big and small, are earnestly building solutions that enable AI instantiations for privacy protection, transparency of outcomes, accountability of data usage, and access for the underprivileged. Many discussed the need for legislated AI policies but also acknowledged the need for organizations to self-govern as part of a larger outreach mission. Our interviews culminated in a fascinating conversation with Nidhi Kalra, Senior Technology Policy Advisor for California Senator Kamala Harris, who stressed the importance of human input in creating the policies that govern AI implementation in our social and economic systems.
There's no doubt that AI continues to hold significant promise, and partnerships that bring complementary views and expand organizational capabilities, like the one between Cognizant and ReD Associates, will be critical in keeping human potential and ethics at the forefront of AI design and implementation.
The entire “Staying Human” interview series will be available on Cognizant’s AI website later this month.