
As the pace of development in AI technology continues to accelerate, organizations must balance innovation with caution in the face of pending regulation. Kubrick Head of Data Management Simon Duncan examines the proposed EU AI Act as it awaits final negotiations, unpacking the key takeaways that will help organizations adopt trustworthy and human-centric AI.

On June 14, 2023, the European Parliament adopted the EU AI Act with an overwhelming 499 votes in favour, 28 against, and 93 abstentions. A final text will not become law until the trialogue (three-way talks) involving the EU Commission, Council, and Parliament has been completed, after which organizations will likely have just two years to reach compliance. The Act will not only affect international organizations that interact with EU Member States, but will undoubtedly influence global AI regulation, or at the very least set standards for ethical AI practices.

Noting how the Act follows in the footsteps of GDPR, Simon considers its context and origins, and the lessons learned that can help organizations prepare for regulation which champions transparency and ethics. He breaks down the 12 titles covering the various aspects of AI regulation, including the classification of AI systems into risk categories: Unacceptable Risk, High Risk, and Low or No Risk.

Where there is room for ambiguity, Simon outlines the documentation and guidance available. This regulation is not a roadblock for businesses looking to adapt with AI: the EU AI Act demonstrates a clear interest in supporting innovation, not hindering it, by driving value from governance that improves outcomes for organizations and individuals alike.

To learn more about the EU AI Act in practice, download the full report:
