The Countdown Begins: How Enterprises Can Prepare for the EU Artificial Intelligence (AI) Act

Artificial intelligence (AI) is transforming the way businesses operate. From providing accurate predictions to answering questions, AI is changing the game. As AI usage continues to explode, regulations are catching up.

The European Union is moving to regulate AI with its upcoming EU Artificial Intelligence Act, which is expected to pass by the end of 2023. This legislation will establish legal requirements that govern how AI services are built and used. The intent is to help users capture benefits from AI in a safe, secure and trustworthy manner that aligns with the EU’s values and fundamental rights.

The End of “Wild West” AI?

The EU AI Act is intended to clamp down on the “Wild West” use of AI and create conditions for fair development and trustworthy use. Like GDPR, the act has global reach and applies across industries.

At its core, the EU AI Act provides risk classification for use cases of AI. Intrusive and discriminatory uses of AI are classified as unacceptable risks and are prohibited. High-risk AI use cases (e.g., credit scoring) are subject to strict obligations. AI use cases that are determined to pose minimal or no risk will remain unregulated.

Enterprises that want to build or use high-risk AI systems are subject to legal requirements defined in the EU AI Act. These requirements affect the design and development of AI solutions and impose obligations including risk management, information and data governance, and monitoring of the system’s use in service.

Before a high-risk AI system can be placed on the EU market, the provider must carry out a conformity assessment to demonstrate compliance with the requirements of the EU AI Act. After the conformity assessment, a CE marking is applied to affirm the system’s conformity with European health, safety and environmental protection standards, and providers must register their high-risk AI systems (HRAIS) in the public EU HRAIS database.

The act proposes to set up a European AI Board to advise the European Commission and national supervisory authorities to oversee the implementation of the act. This creates much-needed guardrails to ensure transparency and explainability in the use of AI.

The Opportunity: Balancing AI-Led Innovation and Potential Impacts on Society

Unregulated AI can have harmful effects on society. We don’t always know where an emerging technology will take us. However, the EU AI Act provides an opportunity for enterprises to continue using AI as a competitive edge while also being responsible to the larger society. Having such legislation in place gives enterprises a legal certainty that they comply with European laws and values. When implemented well, this can increase trust in AI-driven offerings among customers, which in turn drives innovation and adoption of AI.

How Can Enterprises Prepare for the EU AI Act?

Enterprises that use AI systems need to act now. EU lawmakers approved the act in a plenary vote on June 14, 2023. The legislation now moves to trilogue negotiations and is expected to be adopted by the end of 2023. This will be followed by a grace period for businesses to comply, estimated at 24 months.

Noncompliance can result in fines of up to €30 million or up to 6% of total annual worldwide revenue. While the definition of AI and the classification of high-risk AI under the new legislation are still evolving, the scale of the potential penalties puts enterprises in a position where they must prioritize the work necessary to prepare with the following measures:

  • Inventory AI use cases to establish transparency around the use of AI, identify high-risk use cases and perform gap analysis to understand what is required to comply with the essential requirements of the act.
  • Establish frameworks and policies for AI use to ensure your use of AI is compliant with EU legal requirements and other upcoming AI regulations around the globe.
  • Establish an internal AI governance structure to implement and monitor ethical AI frameworks and policies.
  • Define data governance policies to support a data-driven enterprise and compliance with legal requirements to ensure AI is not used as a source of discrimination.
  • Review existing AI model monitoring to ensure compliance with post-market monitoring as defined by the EU AI Act.

Regulations like the EU AI Act are forcing enterprises to evaluate how they use AI services, their broader impact on society and their compliance with legal requirements. While the EU AI Act continues to be amended and the final version is subject to change, enterprises have enough information to get ready for what is coming.

Read the article ChatGPT and the Ethical and Legal Implications of Data and Technology.

ISG helps enterprises navigate a quickly changing AI regulatory landscape. Contact us to find out how we can support you.


About the author

Diwahar Jawahar

Diwahar is a Principal Consultant in ISG’s Cognitive & Analytics advisory practice in EMEA. He helps clients leverage data, analytics and AI to become data-driven enterprises. His advisory experience includes data-to-value realization, shaping data analytics and AI adoption strategies, building enterprise analytics capabilities, and sourcing implementation partners and technology vendors.