Artificial intelligence, particularly machine learning, encompasses a multifaceted set of technologies with many applications in the life sciences sector, such as research into cures, diagnosis and treatment, and mitigation and prevention. Despite the vast potential of AI, the AI regulatory field is in full bloom in some places and merely bare branches elsewhere, depending on how keenly the public and policy makers have cultivated a given patch.

For industry leaders to reap a bountiful harvest from AI, they need to actively monitor policy and regulatory debates even as they pursue the scientific and technical advances that AI can enable.

Because of the complexity of AI and its potential to run afoul of human preferences, policy makers are mostly exploring, rather than finalizing, legislation to tighten controls. Because AI is a more complex regulatory target than virtually any prior technology category, it's vital for policy makers, consumers and industry players to participate in the policy making process and develop guidelines that benefit all stakeholders, including society at large. Even experts cannot agree on AI's precise definition, or on what intelligence or consciousness itself is, let alone on how AI should be controlled wisely.

From a life sciences perspective, the AI regulatory issues that loom largest involve data privacy, ethics and bias, verification and assurance, and accountability and liability. Guidance on the ethical and responsible use of AI, including how to address inherent bias, is still evolving. Nevertheless, companies should anticipate it by developing governance and AI usage models that build social responsibility and ethical-use compliance into their AI development and commercialization efforts.

Progressing Proactively and with Precision

In a complex and evolving AI regulatory environment, we recommend companies adopt the following three-pronged approach to position themselves for success.

  • Governance models: As an evolving set of “intelligent decisioning” tools, AI presents unique challenges for development, regulatory approval (where required) and ongoing compliance. For starters, current quality, compliance and documentation processes need to be extended to cover AI solutions. AI also requires modern software development practices, iterative testing protocols and embedded quality testing.

    An AI governance model may also need a broader set of stakeholders with relevant regulatory experience, including IT, security and analytics, and legal and risk professionals. These stakeholders may need to engage with the initiative earlier, and more continuously, than in traditional product development.

  • Data management: AI solutions require huge amounts of data for training, analytical evaluation and continuous evolution. The processes for gathering and using that data, including managing its security, must be adaptable and scalable. To confirm safety assurances and testing results, data testing and analysis models and tools need to produce auditable evidence (a minimal sketch of one such auditable check follows this list). Data preparation and labeling for AI, whether for training or clinical sets, remains a highly expensive and labor-intensive process.
  • Reporting, tracking and validation: Effective AI solutions are adaptive, requiring sustained analysis of the data and facts upon which they are built and of the metrics and KPIs they are designed to achieve. The adaptive nature of AI requires agile software development processes, as well as supporting processes for safety and audit, risk management, QA and change management, along with the tools and documentation to support them. It’s critical to adapt existing reporting and tracking mechanisms to the dynamic, continuous processes of AI (a sketch of a simple KPI check also follows this list).
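As a concrete illustration of what auditable evidence can look like in practice, the minimal sketch below fingerprints a training file, runs basic completeness checks and appends the results to a log that reviewers can trace back to an exact dataset version. The file names, columns and checks are illustrative assumptions, not a regulatory schema or any specific vendor's tooling.

```python
# Minimal sketch of an auditable data-validation record, assuming a simple
# CSV training set and an append-only JSON-lines audit log. Field names and
# checks are illustrative, not a prescribed schema.
import csv
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Content hash of the dataset file, so results can be tied to an exact version."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def validate(path: str, required_columns: set[str]) -> dict:
    """Run basic completeness checks and return the evidence to be logged."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    missing_cols = required_columns - set(rows[0].keys() if rows else [])
    incomplete = sum(1 for r in rows if any(v == "" for v in r.values()))
    return {
        "dataset_sha256": fingerprint(path),
        "row_count": len(rows),
        "missing_columns": sorted(missing_cols),
        "incomplete_rows": incomplete,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }

def log_evidence(record: dict, audit_log: str = "data_audit.jsonl") -> None:
    """Append the record so every validation run leaves a reviewable trail."""
    with open(audit_log, "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # "training_set.csv" and its columns are hypothetical examples.
    evidence = validate("training_set.csv", {"patient_id", "label"})
    log_evidence(evidence)
```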
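In the same spirit, ongoing reporting and tracking can start with something as simple as comparing each retrained model's KPIs against an approval-time baseline and logging the comparison for change management. The metric names, baseline values and tolerance below are assumptions for illustration only.

```python
# Minimal sketch of KPI tracking for an adaptive model, assuming a baseline
# recorded at approval time. Thresholds and metric names are illustrative.
import json
from datetime import datetime, timezone

BASELINE = {"sensitivity": 0.92, "specificity": 0.88}  # assumed approval-time KPIs
TOLERANCE = 0.03  # assumed allowable degradation before change control is triggered

def check_metrics(current: dict, history_path: str = "kpi_history.jsonl") -> list[str]:
    """Compare current KPIs against the baseline, log the result and return any alerts."""
    alerts = []
    for name, approved in BASELINE.items():
        observed = current.get(name)
        if observed is None or observed < approved - TOLERANCE:
            alerts.append(f"{name}: observed {observed}, approved baseline {approved}")
    record = {
        "measured_at": datetime.now(timezone.utc).isoformat(),
        "metrics": current,
        "alerts": alerts,
    }
    with open(history_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only history for audit and trend review
    return alerts

if __name__ == "__main__":
    # Example: hypothetical metrics from a retrained model's latest evaluation run.
    issues = check_metrics({"sensitivity": 0.90, "specificity": 0.83})
    if issues:
        print("Route to change management:", issues)
```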

AI can yield a bevy of benefits by expanding access to care, improving care outcomes and lowering the cost of care. However, its adoption and deployment at scale require an evolution in operating and governance models, including monitoring, and potentially participating in, the regulatory process. Leaders would be wise to begin adapting their teams and their organizations.

Brian Williams

Brian is Cognizant’s Chief Digital Officer for Life Sciences and is responsible for designing digitally enabled solutions to facilitate care access and...