

November 23, 2021

Ethics by design: how to prepare for AI rules changes

The EU’s AI Act will require deeper education and evangelization to ensure more transparent and ethical use of AI.


With new regulations proposed, AI ethics — like data privacy — has become a top priority for companies. As heads of state from European Union member nations begin to discuss the EU’s proposed Artificial Intelligence (AI) Act, companies are exploring what it will take to adhere to one of the first major policy initiatives focused on harmful AI.

The answer is clear: Compliance will require businesses to educate and evangelize across their organizations. While it will likely take two years for these new rules to come into effect, it’s not too soon to prepare.

Thinking beyond profits

Ethics by design is a different way of thinking for companies. In addition to considering profit-driven outcomes, companies will now need to assess the harm and impact of their practices and provide oversight to manage AI risks.

Proposed in April 2021, the AI Act aims to mitigate harmful uses of AI. It promotes the transparent, ethical use of AI and keeps machine intelligence under human control. The regulations would outlaw four AI practices that cause physical or psychological harm: social scoring, dark-pattern AI, manipulation and real-time biometric identification systems.

Equally important to companies are the act’s proposed penalties. Fines for noncompliance are significantly higher than those under the EU’s General Data Protection Regulation (GDPR), reaching up to €30 million, or 6% of annual revenue. By contrast, the GDPR caps fines at €20 million, or 4% of revenue.

Learning from data privacy’s rise to prominence

Beyond the penalties, we see strong parallels between the proposed AI Act and GDPR. When it took effect in 2018, GDPR elevated privacy to priority status for organizations, and the AI Act will trigger a similar lift for AI ethics. The good news is that many of the processes companies implemented for GDPR compliance, such as the privacy impact assessments used when sensitive data is processed, can serve as the foundation for adopting responsible AI.

But there are key differences between data privacy and AI ethics, and they make navigating this new territory more uncertain. For one thing, data privacy is about minimizing the data needed for an intended purpose; AI thrives on mining as much of it as possible.

Further, while there is general agreement on the level of personal detail that needs to be protected, AI ethics involves more nuanced and subjective issues such as bias and fairness. Most chatbots won’t fall into the high-risk category because they are narrowly defined and task-oriented (think help desks), but the exceptions reveal the complexities inherent in AI ethics.

For example, a consumer-facing chatbot that helps people find social services could be classified as high risk because of its potential to affect access and outcomes, especially for historically oppressed populations. It comes down to the likely harm to the individual.

Often overlooked in discussions of the AI Act is that the proposed regulations apply not just to personal details but also to the datasets, models and algorithms used in business decisions. This coverage of nonpersonal data has huge ramifications for businesses.

For example, the AI Act’s restrictions, if enacted, would apply to the data banks used to assess the risk of commercial loans. The data in these cases typically relates to the type of business, its location and other details such as local crime rates. No personal data is involved, yet because the risk assessment is an AI-driven business decision, it falls under the act’s efforts to avoid the unintended bias and real-world harm seen in home mortgage lending.
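To make the point concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of such a loan-risk model. The feature names, data and outcome labels are invented for illustration; the takeaway is that even a model trained on purely nonpersonal inputs would still sit inside the act’s scope.

    # Hypothetical example: a commercial-loan risk model trained only on
    # nonpersonal features (business type, region, local crime rate).
    # All data below is invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: business_type_code, region_code, local_crime_rate
    X = np.array([
        [1, 10, 0.02],
        [2, 11, 0.15],
        [1, 12, 0.07],
        [3, 10, 0.30],
    ])
    y = np.array([0, 1, 0, 1])  # 0 = low default risk, 1 = high default risk

    model = LogisticRegression().fit(X, y)

    # No personal data appears anywhere in X, yet this is still an
    # AI-driven business decision: under the proposed AI Act, the lender
    # would need to document the dataset, test for proxy bias (e.g., crime
    # rate standing in for neighborhood demographics) and keep a human in
    # the loop for the final decision.
    print(model.predict_proba([[2, 11, 0.10]]))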

Similarly, the AI Act would cover the use of AI in functional chatbots tied to business operation flows, such as directing employees to HR forms. While unlikely to fall into the high-risk category, these chatbots will require additional justification to demonstrate that their processes aren’t intrusive.

Preparing for the AI Act: educate and evangelize

Making the changes required to comply with the proposed regulations also opens significant opportunities for businesses. They’ll learn how to implement AI safely and pinpoint where it can deliver ROI, especially in regulated industries such as healthcare, banking and brokerage.

Protecting corporate AI investments will entail changes in culture, hierarchy and governance to ensure the precise risk management the AI Act requires. It’s a balancing act of protecting people and profits. We recommend organizations take the following steps to get started:

  • Expand responsibility for AI. By seating more people at the table for AI initiatives, organizations can deepen their understanding of AI’s human factors before they create models. Responsible AI’s success depends on a host of participants extending beyond data scientists all the way to the C-suite. To ensure transparency in AI decision-making processes, key contributors should include social scientists, ethnographers, industry and governance experts, and design thinking researchers who understand how people engage with computers.

    For example, we’re now partnering with a governmental entity to build responsible AI. By starting with a set of foundational principles, the agency’s cross-functional team is drawing on multiple points of view to better understand AI and create improved oversight.

  • Establish and empower new roles. Responsible AI is markedly different from compliance; it’s not about spot checks. Creating AI efforts that adhere to the proposed AI Act requires new roles.

    For example, data ethicists are set to emerge as important influencers within organizations. The job description will include conducting risk assessments and ensuring AI-related regulatory compliance. But more important to the success of the position — and to AI initiatives — will be where data ethicists and AI teams sit within the organizational hierarchy. Ensuring they’re empowered to make decisions is key. AI ethics is much bigger than just understanding algorithms; buy-in from executive leadership is paramount.

  • Prepare to play in the sandbox. Among the AI Act’s provisions is the establishment of “regulatory sandboxes” similar to the ones used in the fintech industry. The idea is to create a window in which companies can develop, test and validate innovative AI systems before taking them to market. Companies remain responsible for risks that arise in the development and testing phase.

    Sandboxes are useful for encouraging innovation in high-risk or challenging situations and tackling thorny problems with AI. With their emphasis on exploring an application’s potential for harm, sandboxes also signal one of the biggest mindset shifts the AI Act imposes on companies.

Two years might feel like a long way out, but it’s a comparatively short window when it comes to reshaping organizations. Although the details of the AI Act will likely continue to evolve before enactment, regulation is coming. To avoid discovering too late that your models aren’t compliant, the time to act is now.


Cognizant Insights Team
Cognizant

We’re here to offer you practical and unique solutions to today’s most pressing technology challenges. Across industries and markets, get inspired today for success tomorrow.


