When it comes to high-tech, nothing seems high-techier than artificial intelligence (AI). But when businesses get started on AI initiatives – which most respondents in our recent study are doing – the challenge today is less about technical questions and technology capabilities, and more about crafting a strategy, determining the governance structures and ethical practices required, and accelerating the move from experiments to full-scale AI adoption.  In other words, “real and responsible AI.”

What’s heartening is that among our clients, we’re hearing people ask more sophisticated questions when it comes to AI deployments. Rather than asking us for an “intro to AI,” businesses are further along in their understanding, wondering, for example, about the pros and cons of using centralized vs. federated AI teams. Many are targeting two to three core areas for experimentation and pilots, mostly with the end goal of improving customer experience. (Hear more on this from Poornima Ramaswamy, VP in Cognizant Digital Business, in the video below.)

Top Tips for AI Deployments

We discussed the essential components of a real and responsible AI deployment in our recent webinar. Here’s a quick summary of eight ways to ensure your AI initiative is off to the right start:

  1. Think of data science not as an isolated practice but as part of an AI value chain. The purpose of data science is to glean insights from terabytes of data, but in and of itself, data science is not the endgame. Instead, it’s essential to fit data science into the AI value chain. The first stage is putting the data into a human-centric perspective: What is the human context for what the data is telling us? The second is using machine learning to discover and predict future patterns: What does the data reveal about future behaviors and trends? The final stage, realized through the AI initiative itself, is using this information to begin making AI-driven decisions. (A simple sketch of this chain appears below.)
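To make the chain concrete, here’s a minimal sketch in Python, assuming a hypothetical insurance-claims dataset; the file name, feature columns and cutoff are illustrative, not from the webinar:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    # Stage 1: put the raw data into a human-centric frame. For a
    # hypothetical claims dataset, we pick fields that describe the
    # customer's actual experience of the claims process.
    claims = pd.read_csv("claims.csv")  # hypothetical file
    features = claims[["days_to_resolution", "contact_count", "claim_amount"]]

    # Stage 2: use machine learning to predict a future behavior --
    # here, whether a customer is likely to leave after a claim.
    model = GradientBoostingClassifier().fit(features, claims["churned"])

    # Stage 3: feed the prediction into an AI-driven business decision.
    at_risk = model.predict_proba(features)[:, 1] > 0.7  # illustrative cutoff
    claims.loc[at_risk, "action"] = "escalate to retention team"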

  1. It’s not “that” you’re doing AI but “why” you’re doing AI. Many businesses are understandably anxious about where to start with AI. But the starting point has less to do with AI and more to do with the business itself. The right place to start is with your business pain points, adopting AI where it’s the best tool for resolving them. For example, most companies, other than the FANGs, would still exist without AI; what they need is to improve the customer experience or increase claims-processing efficiency, and AI is often the most effective way to do that.

  1. Leaving ethics out of AI is as bad as leaving ethics out of business. Two years ago, the big issue with AI was selecting a platform. More recently, a good deal of attention has been paid to AI training data and data curation. Now, the spotlight is shifting to AI governance and ethics. More businesses today understand that just as they have a chief ethicist responsible for the actions and outcomes of decisions made with human intelligence, they need the same accountability when they turn those decisions over to AI.

  1. Businesses need to control for AI bias. On a closely related note, businesses need to ensure they understand and can control for bias when constructing and continuing to run their AI systems. There are three types of bias to understand: bias that stems from the data used to train the system (if the data is biased, the outcomes will be biased); bias on the part of the employees creating the AI systems; and bias propagated by self-learning AI systems as they themselves create new AI systems. Businesses are responsible for controlling the behavior of their AI systems and ensuring they achieve desired and ethical outcomes. Sometimes, this means deciding to avoid entire business opportunities, as when Google recently discontinued an AI project with the Pentagon. Human judgment is integral to deciding whether AI initiatives are consistent with the business’s values and customer priorities. (A simple check for the first type of bias is sketched below.)
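The first type, bias in the training data, is the easiest to check before a model is ever built. Below is a minimal sketch assuming a hypothetical loan-approval dataset; the file name, column names and the 10% tolerance are illustrative:

    import pandas as pd

    # Hypothetical historical loan decisions used to train an approval model.
    data = pd.read_csv("loan_history.csv")

    # Compare approval rates across groups in the training data itself.
    # If the historical rates differ sharply by group, a model trained
    # on this data will learn and reproduce the same skew.
    rates = data.groupby("gender")["approved"].mean()
    print(rates)

    # Flag a large demographic-parity gap for human review.
    gap = rates.max() - rates.min()
    if gap > 0.10:  # illustrative tolerance set by a governance team
        print(f"Warning: {gap:.0%} approval-rate gap -- review before training")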

  1. It’s never too early to think about governance. Most organizations today recognize that AI isn’t “just another technology implementation” to be executed by the CIO office – it requires the entire organization to be involved. As a result, governance concerns, along with privacy and ethics, are being addressed as soon as AI experimentation begins. This is particularly true in highly regulated industries like financial services and healthcare. With governance mechanisms in place, businesses can be ready to shut down an AI initiative that’s showing unwanted results, as Amazon did when its AI recruiting system showed bias against women. (One way to operationalize this readiness is sketched below.)
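One practical expression of such governance is a guardrail wired into production monitoring that can take a model out of service automatically. A minimal sketch, with a hypothetical metric and threshold (neither is from the webinar):

    # Hypothetical governance guardrail: take a model out of service when
    # a monitored outcome metric drifts past a board-approved threshold.
    FAIRNESS_THRESHOLD = 0.10  # illustrative maximum allowed outcome gap

    def keep_in_production(outcome_gap: float) -> bool:
        """Decide whether the model stays live; fall back to the manual
        process and alert the governance team when the policy is breached."""
        if outcome_gap > FAIRNESS_THRESHOLD:
            print(f"Gap of {outcome_gap:.0%} exceeds policy -- model disabled")
            return False
        return True

    # Example: a nightly audit job computes the gap and applies the policy.
    model_live = keep_in_production(outcome_gap=0.14)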

  1. Plan ahead for harmful human behaviors. We often hear about AI causing job loss; less discussed is humans’ potential for sabotaging AI systems. It wouldn’t take much, for example, for thieves to steal the tires from a self-driving pizza delivery truck – the autonomous vehicle would likely not be trained to defend itself by hitting the gas or shifting into reverse as a human might. And a customer who would never berate a human customer service agent might think nothing of screaming at a chatbot to game the system. When designing AI systems, businesses need to recognize that people will likely behave differently when they know they’re interacting with an AI system instead of a human.

  1. Fixing AI failures is a team sport. With self-learning AI systems, continuous maintenance is a necessity. And when things aren’t going in the right direction, it’s not just the programmers who should be consulted – businesses need a team of ethicists, sociologists, marketers and others to diagnose the problem and set the system in the right direction. If your child were having problems in school, after all, you wouldn’t call the obstetrician who delivered the baby – you’d get the teachers, social workers and administrators at the school involved. So it is with AI. These systems are built with technology, but their evolution requires a human hand.

  1. Be open to the unforeseen – and non-AI – side benefits of AI. While many organizations begin an AI initiative with an interest in the technology, they often find the most positive changes come from non-tech discoveries. An example is a hospital that used AI to better understand the factors determining health outcomes. Among patients in the U.S. Medicaid program, for instance, social factors such as access to food, shelter and transportation had more impact on health outcomes than weight, blood pressure or body temperature. Using natural language processing (NLP), the hospital was able to extract those hidden nuggets of information and connect patients with relevant social service organizations. In many cases, however, physicians were unsure of how to help, revealing process gaps that needed to be fixed to improve the patient experience. In this way, the AI system created an opportunity to improve the broader ecosystem by improving collaboration among hospital staff. (A simplified sketch of this kind of extraction follows below.)
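As an illustration of the mechanics, here is a minimal sketch of social-needs screening over free-text notes. It uses simple keyword matching as a stand-in for the hospital’s actual NLP models, and the categories and phrases are hypothetical:

    # Hypothetical social-needs screen over free-text clinical notes.
    # A production system would use trained NLP models; keyword matching
    # stands in here to show the shape of the task.
    SOCIAL_NEEDS = {
        "food": ["food insecure", "skipping meals", "food bank"],
        "housing": ["homeless", "eviction", "unstable housing"],
        "transportation": ["no ride", "missed appointment", "no car"],
    }

    def flag_social_needs(note: str) -> list[str]:
        """Return the social-need categories mentioned in a note."""
        text = note.lower()
        return [need for need, phrases in SOCIAL_NEEDS.items()
                if any(p in text for p in phrases)]

    note = "Patient reports skipping meals and has no car to get to the clinic."
    print(flag_social_needs(note))  # ['food', 'transportation']

Flagged patients can then be routed to the relevant social service organizations, which is where the non-AI process gaps described above tend to surface.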

Strategy, ethics and governance – not just technology know-how – are essential components of real and responsible AI. Whether you’re just getting started or ready to go live with AI, it’s vital to combine the power of the technology with human judgment and ethical frameworks. (For more detail, view the outtakes or the full webinar.)

Jerry Smith

Jerry A. Smith is Vice-President of Data Sciences at Cognizant. He is a practicing data scientist with a passion for realizing business...

Poornima Ramaswamy

Poornima Ramaswamy is Vice-President of Cognizant’s AI and Analytics Practice. With her 20 years of experience, she consults and works with clients across...

James Jeude

James Jeude is Vice-President and Practice Leader of Cognizant Digital AI & Analytics Strategic Initiatives Group. His career has provided perspective for...