Like humans, cognitive computing systems learn from an ever-growing collection of knowledge. And clearly, business interest is high in adopting these systems to accelerate decisions and improve their accuracy. Businesses need a plan, however, for when intelligent systems learn in unexpected ways.

The fact is, cognitive computing systems do sometimes learn in ways their designers didn’t plan. In this way, cognitive computing systems are a lot like humans. A preschooler might “learn” from his or her cousin that it’s OK to swear. I might “learn” that kale gives me heartburn when it was really the cheesecake.

Real-world examples of unintended learning abound. Famously, the U.S. Army trained a computer vision system to detect camouflaged enemy tanks. The system performed flawlessly for testers, but not for the Pentagon. The designers eventually solved the mystery: The camouflaged tanks had been photographed on cloudy days and the empty forests on sunny days. The system had learned to associate cloudy days with tanks.

Three Ways that Cognitive Computing Systems Learn

What steps should you take when a cognitive computing system learns something other than what you intended? Like the brain, cognitive computing systems are built from masses of interconnected neurons. Also like the brain, they're coded only once and develop new behaviors through learning; the maxim is code once, learn forever. The remedy is to understand how learning went awry, and then re-train.

This learning happens in one of three ways:

  • The simplest type of training is presenting hand-chosen input-output pairs. Imagine a system designed to identify possible skin cancer by analyzing images. Image A is cancer; image B is not. This is straightforward machine learning. Accuracy remains fairly constant over time but does not improve.
  • More advanced systems are programmed to periodically consult learning content that developers identify, such as medical image databases. The code might say, “When your error rate increases to X percent, re-learn until your error rate approaches 0 again” (see the sketch after this list).
  • The smartest cognitive computing systems, called self-learning systems, are programmed to seek out new content anytime, anywhere: websites, FTP sites, databases, streaming data from the Internet of Things, PDF documents and so on. Self-learning systems learn continually, leading to better decisions.
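To make that second mode concrete, here is a minimal sketch of an error-triggered re-learning check. It assumes a scikit-learn-style model; the threshold value and the function and variable names are hypothetical, not from the original article.

```python
# Sketch of the second learning mode: check the error rate against a
# threshold and re-learn from developer-identified content when it drifts.
# The 5 percent threshold and the scikit-learn stack are assumptions.
from sklearn.metrics import accuracy_score

ERROR_THRESHOLD = 0.05  # the "X percent" trigger; the value is illustrative

def maybe_retrain(model, X_val, y_val, X_curated, y_curated):
    """Re-fit the model on curated data if validation error exceeds the threshold."""
    error = 1.0 - accuracy_score(y_val, model.predict(X_val))
    if error > ERROR_THRESHOLD:
        # Re-learn from the curated sources (e.g., a labeled medical image
        # database) until accuracy recovers.
        model.fit(X_curated, y_curated)
    return model
```

In practice the check would run on a schedule, and the curated data would be refreshed from the developer-identified sources each time it fires.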

Returning to the cancer example, if you tell a self-learning system, “Find images of skin cancers,” it searches every source available over its network connection. Learning from more images increases its diagnostic accuracy. An advanced system might even learn alternate names for the same condition, leading to the discovery of additional images.
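As a rough illustration of that behavior, a self-learning loop might expand the query with alternate names for the same condition and pull candidate images from every reachable source. The synonym list and the `source.search` interface below are hypothetical; they are meant only to show the shape of the behavior.

```python
# Hypothetical sketch of a self-learning content search: expand the query
# with alternate names for the same condition, then collect candidate images
# from every reachable source. Synonyms and the source interface are assumed.

SYNONYMS = {
    "melanoma": ["malignant melanoma", "cutaneous melanoma"],
}

def gather_training_images(query, sources):
    """Collect images for `query` and its alternate names from all sources."""
    terms = [query] + SYNONYMS.get(query, [])
    images = []
    for source in sources:          # websites, FTP sites, databases, PDFs, ...
        for term in terms:
            images.extend(source.search(term))   # assumed interface
    return images
```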

When Self-Learning Systems Go Awry

Self-learning systems offer the greatest potential to take over human decisions because they never stop learning. But without a human to curate their sources, these systems are also more likely to learn something that deviates from what their designers intended. Imagine that the self-learning system for identifying skin diseases discovers images in which a tattoo is hiding melanoma. It might erroneously conclude that tattoos are malignancies. False positives would rise, and you wouldn’t know why. The less control you have over a system’s learning inputs, the more difficult it is to identify the source of unintended learning.

What’s the remedy when self-learning systems go off-track? Recoding is not the answer – the maxim in cognitive computing is code once, learn forever. You wouldn’t recode a cognitive computing system any more than you’d recode the swearing toddler’s DNA.

Updating a database isn't the answer either; cognitive computing systems don't have a database to update. Nor is adding a rules engine, because static rules can't evolve. The only antidote to unintended learning is re-training.

Role of the Digital Psychologist

The first step in re-training a cognitive computing system is finding out what the system learned, and the source. This task requires a new job description: digital psychologist.

The digital psychologist assesses the “patient’s” history, noting when the unintended behavior emerged. Activity logs list the sources the system visited within that timeframe. After identifying which source caused the unintended learning, the digital psychologist conducts a controlled learning session. A system that has learned incorrectly that tattoos are cancerous, for example, is shown images of non-malignant tattoos.
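A minimal sketch of that first diagnostic step, assuming the system keeps a timestamped activity log of the sources it visited; the log format, field names, and example URLs are hypothetical.

```python
from datetime import datetime

# Hypothetical activity-log entries: (timestamp, source) pairs.
activity_log = [
    (datetime(2017, 3, 1, 9, 15), "https://example.org/derm-images"),
    (datetime(2017, 3, 4, 22, 40), "ftp://example.net/tattoo-photo-archive"),
]

def sources_visited(log, start, end):
    """Return the sources the system consulted within the given window."""
    return sorted({source for ts, source in log if start <= ts <= end})

# The digital psychologist narrows the window to when false positives rose,
# reviews each suspect source by hand, and then runs a controlled re-training
# session (e.g., images of non-malignant tattoos).
suspects = sources_visited(activity_log,
                           datetime(2017, 3, 3), datetime(2017, 3, 6))
```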

Human psychologists also administer tests in their quest to understand the sources of undesired behavior. Unfortunately, such tests do not yet exist for cognitive systems. When they do, the digital psychologist might be able to simply ask, “Why do you think this image shows skin cancer?” or “Where did you learn that tattoos indicate melanoma?” When available, enterprise-class testing tools will accelerate application development and testing, and reduce anomalous behaviors.

Role of the Digital Sociologist

Some cognitive computing systems operate in a decentralized manner: Teams of small, inexpensive, autonomous objects work together to complete a complex task. For example, the U.S. military has successfully tested a swarm of more than 100 autonomous micro-drones. These systems have no centralized controller, and yet the drones can quickly make collective decisions and fly in adaptive formations.

Similar to decentralized cognitive computing systems, hundreds of thousands of starlings communicate with each other as they fly together at dusk near the Solway Firth and the Scottish town of Gretna.

Training individual objects in a cognitive computing system to respond to each other in real-time requires a digital sociologist – another new job description. To train tiny bots to remain in proximity to one another, for example, the digital sociologist might develop a series of instructions such as, “If no bots are within one inch, find the nearest pair and move toward them.”
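One way a digital sociologist might express that rule in code is sketched below; the coordinate representation, step size, and function names are assumptions for illustration only.

```python
import math

# Hypothetical sketch of the proximity rule quoted above. Positions are
# (x, y) tuples and distances are in inches; the step size is illustrative.

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proximity_step(me, others, step=0.1):
    """If no bot is within one inch of `me`, move a small step toward the
    midpoint of the two nearest bots; otherwise stay put."""
    if len(others) < 2 or any(distance(me, o) <= 1.0 for o in others):
        return me  # a neighbor is close enough (or there is no pair to join)
    nearest = sorted(others, key=lambda o: distance(me, o))[:2]
    tx = sum(p[0] for p in nearest) / 2
    ty = sum(p[1] for p in nearest) / 2
    d = distance(me, (tx, ty))
    if d == 0:
        return me
    return (me[0] + step * (tx - me[0]) / d,
            me[1] + step * (ty - me[1]) / d)

# Example: a bot at the origin with no neighbor within an inch drifts toward
# the two bots nearest to it.
print(proximity_step((0.0, 0.0), [(3.0, 0.0), (4.0, 0.0), (10.0, 10.0)]))
```

Each bot applies the same local rule, and the group behavior emerges from those individual decisions rather than from a centralized controller.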

As sports fans well know, even if all individuals on a team perform well, the team as a whole may not. The same applies to distributed cognitive computing systems. When undesired emergent behavior appears, the digital sociologist needs to discover the reason and then re-train the individual objects to produce the desired group behavior.

Cognitive computing systems cannot act optimally on their own. Intelligent systems will always require humans to observe, guide and retrain them when their behavior goes awry.

This blog was adapted from an article that originally appeared on RT Insights’ Center for Cognitive Computing.

Jerry Smith

Jerry A. Smith is Vice-President of Data Sciences at Cognizant. He is a practicing data scientist with a passion for realizing business…