European Union's Artificial Intelligence Act (EU AI Act)

What is the EU AI Act?

The European Union's Artificial Intelligence Act (EU AI Act) is a pioneering law that regulates the development and use of AI within the EU. Proposed in April 2021, it entered into force in August 2024, establishing a comprehensive legal framework for AI and positioning Europe as a global leader in AI governance.

The act aims to ensure the safe and ethical deployment of AI across sectors. It does so by adopting a risk-based approach and imposing different rules based on the level of risk associated with AI technologies.

Who does the EU AI Act apply to?

The EU AI Act applies to various entities in the AI value chain, including:

  • Providers, who develop AI systems (or have them developed on their behalf) and place them on the EU market
  • Deployers, who use these systems in their operations (such as employing third-party chatbots)
  • Importers, who bring foreign-developed AI systems into the EU market

The Act tailors its obligations to each of these roles to ensure AI technologies are deployed safely across the value chain.

Key provisions of the EU AI Act

The EU AI Act establishes a comprehensive framework for regulating artificial intelligence.

Key features of the Act include:

  • A risk-based approach to AI regulation
  • Strict rules for high-risk AI applications
  • Transparency and explainability requirements
  • Emphasis on data quality and bias mitigation
  • Mandatory human oversight for certain AI systems

The Act's implementation will be gradual, with different provisions coming into effect over a 36-month rollout.

The key stages are:

  • February 2025: Prohibitions on certain AI practices, such as social scoring and real-time biometric identification in public spaces, become effective.
  • August 2025: Rules governing general-purpose AI models take effect, affecting a wide range of AI applications across industries.
  • August 2026: Most rules for high-risk AI systems come into effect, giving businesses time to adapt their more complex AI applications.
  • August 2027: Regulations for high-risk AI systems used as safety components in products are the last to be enforced, recognizing the complexity of integrating AI into critical product safety features.

This phased approach gives businesses time to prepare for each milestone.

What are the penalties for violating the EU AI Act?

Non-compliance carries significant penalties, applied in most cases as the higher of a fixed amount or a percentage of global annual turnover (the calculation is sketched after this list):

  • Up to €35 million or 7% of global annual turnover (whichever is higher) for the most serious infringements
  • Up to €15 million or 3% of global annual turnover for certain other infringements
  • Up to €7.5 million or 1% of global annual turnover for supplying incorrect information to authorities
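
To make the "whichever is higher" mechanics concrete, here is a minimal Python sketch of the fine calculation. The function and tier names are illustrative only and not drawn from the Act or any official source; the caps mirror the figures above, and the special treatment of SMEs (which pay the lower of the two amounts) is omitted for simplicity.

```python
# Minimal sketch of the EU AI Act's "whichever is higher" fine logic.
# Tier names and this function are hypothetical, not part of the Act.

def applicable_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given infringement tier."""
    tiers = {
        "prohibited_practices":  (35_000_000, 0.07),  # €35M or 7%
        "other_obligations":     (15_000_000, 0.03),  # €15M or 3%
        "incorrect_information": ( 7_500_000, 0.01),  # €7.5M or 1%
    }
    fixed_cap, turnover_pct = tiers[tier]
    # For most undertakings, the higher of the two amounts applies.
    # (SMEs are capped at the lower amount; omitted here for simplicity.)
    return max(fixed_cap, turnover_pct * global_annual_turnover_eur)

# Example: a firm with €2B global annual turnover faces up to €140M for a
# prohibited practice, since 7% of €2B exceeds the €35M fixed cap.
print(f"€{applicable_fine('prohibited_practices', 2_000_000_000):,.0f}")
```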

How the EU AI Act impacts financial services firms

The EU AI Act presents new challenges for financial services firms, which must navigate the new requirements while continuing to innovate.

Key implications include:

Reassessment of AI systems:
Firms must review and categorize their AI applications based on the Act's risk classifications. Many AI systems used in the sector, such as credit scoring, fraud detection, or algorithmic trading, may fall into the high-risk category, requiring enhanced oversight and compliance measures.

Enhanced transparency:
The Act mandates that high-risk AI systems be transparent and explainable. This requirement may necessitate significant changes in how financial institutions develop and deploy AI, particularly in areas like automated lending decisions or investment recommendations.

Data governance overhaul:
With strict requirements on data quality and bias mitigation, financial firms will need to reassess their data management practices. This includes ensuring the representativeness of training data and implementing robust data governance frameworks.

Human oversight implementation:
Human oversight is mandatory for high-risk AI systems. This could impact efficiency gains from AI automation, requiring firms to balance compliance with operational effectiveness.

Documentation and reporting:
The Act requires extensive documentation of AI systems, including development, testing, and ongoing performance. This will necessitate new processes and potentially dedicated resources for AI governance and reporting.

Cross-border considerations:
While the Act applies to the EU, its effects will be felt globally. Any financial institution that operates in (or plans to enter) the EU market will need to consider how to align its global AI strategies with EU requirements.

Innovation and competitiveness:
While the Act aims to foster trust in AI, there are concerns that stringent regulations might stifle innovation. Financial firms will need to navigate these new rules while maintaining their competitive edge in AI development.

Best practices

With the EU AI Act now in force, financial services firms must take immediate steps to ensure compliance and mitigate risks:

1. Conduct an AI inventory. Thoroughly catalog all AI systems in use, including those provided by third parties, and classify them according to the Act's risk categories (a minimal inventory sketch follows this list).

2. Perform a gap analysis. Assess current AI governance practices against the Act's requirements. Identify areas needing improvement, particularly for high-risk systems.

3. Establish an AI governance framework. Develop a comprehensive framework that addresses risk assessment, transparency, data quality, human oversight, and documentation requirements.

4. Review and update AI development processes. Ensure AI development methodologies align with the Act's principles, including transparency, explainability, and bias mitigation.

5. Enhance data management. Implement robust data governance practices to ensure high-quality, representative data for AI training and operation.

6. Train staff. Educate relevant personnel on the EU AI Act's requirements and the organization's compliance strategy.

7. Engage with regulators and industry bodies. Stay informed about regulatory guidance and participate in industry discussions to shape best practices.

8. Plan for compliance documentation. Start preparing the extensive documentation required by the Act, including AI system specifications, risk assessments, and ongoing performance monitoring.

9. Review third-party relationships. Assess the compliance of AI vendors and service providers with the new regulations.

10. Allocate resources. Consider forming a dedicated AI compliance team or expanding existing compliance functions to handle the new requirements.
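
As a starting point for step 1, a firm might keep a simple machine-readable inventory of its AI systems tagged with the Act's risk tiers. The sketch below is a hypothetical illustration: the category names follow the Act's four risk tiers, but the data model, field names, and example systems are assumptions, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

# The Act's four risk tiers; everything else in this sketch is hypothetical.
class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited"    # e.g., social scoring
    HIGH = "high-risk"             # e.g., credit scoring
    LIMITED = "limited-risk"       # transparency duties, e.g., chatbots
    MINIMAL = "minimal-risk"       # e.g., spam filters

@dataclass
class AISystemRecord:
    name: str
    owner: str                     # accountable business unit
    vendor: str                    # "internal" or a third-party provider
    category: RiskCategory
    notes: str = ""

# Illustrative entries for a financial services firm.
inventory = [
    AISystemRecord("Credit scoring model", "Lending", "internal",
                   RiskCategory.HIGH, "Creditworthiness assessment"),
    AISystemRecord("Customer service chatbot", "Operations", "ThirdPartyCo",
                   RiskCategory.LIMITED, "Disclose AI interaction to users"),
    AISystemRecord("Email spam filter", "IT", "internal",
                   RiskCategory.MINIMAL),
]

# Surface the systems that need enhanced oversight first.
for record in (r for r in inventory if r.category is RiskCategory.HIGH):
    print(f"{record.name} ({record.owner}): review against high-risk duties")
```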

What Smarsh is doing

The EU AI Act focuses on “system providers,” setting high-level requirements for each risk category, with severe penalties if the appropriate controls are not in place. Smarsh supports financial services firms in complying with the EU AI Act through its comprehensive AI-powered solution, Enterprise Conduct. The platform enhances supervision and surveillance of data-heavy communications while addressing an evolving regulatory landscape. With Smarsh AI-powered products, our customers have one less big-ticket item to worry about, as they can trust our solutions to adhere to the latest regulatory requirements.

Moreover, Smarsh has a proven track record of staying ahead of regulatory change and trailblazing communications compliance solutions. Smarsh continues to work diligently to strategize and innovate around the EU AI Act and other evolving regulations and enforcement actions. With dedicated support services from Smarsh, including tailored training and 24/7 expert access, firms can confidently navigate the complexities of compliance, ensuring they are well prepared to meet ongoing regulatory challenges and demonstrate adherence to the necessary guidelines.

Smarsh, Inc. assumes no liability for the accuracy or completeness of this information. Please consult with an attorney for specific information on specific rules and regulations and how they apply to your business.
