EU AI Act: A cornerstone for business opportunities

After five years on the horizon, the European Union’s Artificial Intelligence Act (EU AI Act or AI Act) has finally been approved. The EU AI Act is the culmination of thousands of discussions and represents one of the most comprehensive efforts to date to provide guidance in the emerging area of artificial intelligence. Although most requirements will become applicable 24 months after adoption, some will take effect after just six months, while certain obligations for high-risk AI systems will become applicable only after 36 months.

The final document underwent numerous revisions during the drafting and consulting phases. The original proposal took a “technology-neutral approach”—meaning that the same regulatory principles should apply regardless of the technology used, as regulations should not be drafted in technological silos. The new amendments clearly deviate from this by introducing specific obligations for general-purpose AI models.

By introducing specific obligations for general-purpose AI models, the legislature risks making the AI Act too granular for the industries that deploy AI systems, and its requirements could lose relevance in the face of rapid technological development. However, the challenges presented by the technologies introduced to the public over the past couple of years are simply too great—and have raised too much concern in European capitals—to be ignored.

The European Union recognizes the tensions in creating rules that will be valid today and tomorrow. In an article explaining the framework, the European Commission (EC) stated, “The proposal has a future-proof approach, allowing rules to adapt to technological change.”

Peeling back the layers

The EU AI Act takes a “risk-based approach” to AI systems, sorting them into four levels:

No/minimal risk,

Limited risk,

High risk,

Unacceptable risk.

The high-risk category contains one example that is very relevant to financial-services organizations: “essential private and public services” (e.g., credit scoring that denies citizens the opportunity to obtain a loan).
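
To make these tiers concrete, here is a minimal Python sketch of how an institution might tag its own use cases against the Act’s four levels. The example systems and their assignments are illustrative assumptions for this article, not an official classification.

from enum import Enum

class RiskLevel(Enum):
    """The EU AI Act's four risk tiers."""
    MINIMAL = "no/minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Illustrative mapping only: each institution must classify its own systems
# against the Act's criteria, and these example assignments are assumptions.
USE_CASE_RISK = {
    "spam_filter": RiskLevel.MINIMAL,
    "customer_chatbot": RiskLevel.LIMITED,      # transparency obligations
    "credit_scoring": RiskLevel.HIGH,           # "essential private and public services"
    "social_scoring": RiskLevel.UNACCEPTABLE,   # prohibited practice
}

for use_case, level in USE_CASE_RISK.items():
    print(f"{use_case}: {level.value}")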

The EU explicitly lists several requirements for systems in the high-risk category:

Adequate risk assessment and mitigation systems,

High quality of the datasets feeding the system to minimize risks and discriminatory outcomes,

Logging of activity to ensure traceability of results (a minimal sketch follows this list),

Detailed documentation providing all the necessary information on the system and its purpose needed by authorities to assess its compliance,

Clear and adequate information to the deployer,

Appropriate human-oversight measures to minimize risks,

High levels of robustness, security and accuracy.
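
The logging requirement, in particular, lends itself to a concrete illustration. Below is a minimal Python sketch of structured decision logging for a high-risk system such as a credit-scoring model; the field names, values and log destination are assumptions chosen for illustration, not formats mandated by the Act.

import json
import logging
from datetime import datetime, timezone

# Structured log of every automated decision so that results stay traceable.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("credit_scoring_audit")

def log_decision(application_id: str, model_version: str,
                 inputs: dict, score: float, outcome: str) -> None:
    """Record one scoring decision with enough context to reconstruct it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "model_version": model_version,  # ties the result to a specific model build
        "inputs": inputs,                # the features the model saw (illustrative)
        "score": score,
        "outcome": outcome,              # e.g., "approved" or "referred to human"
    }
    logger.info(json.dumps(record))

log_decision("APP-001", "credit-model-2.3",
             {"income": 42000, "tenure_years": 5}, 0.81, "referred to human")

The design point is that each logged record carries enough context (model version, inputs, outcome) for an auditor to reconstruct how a given result was produced.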

In other words, the AI Act means that institutions across a swath of the financial system will be required to demonstrate a robust commitment to ensuring their AI systems are reliable and trustworthy. Failure to comply risks large fines and regulatory actions that could cripple even the largest institutions.

Understanding legal definitions

First, it is useful to define what AI means in the context of the law.

The EU AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The Act also covers the general-purpose AI model, a recent advancement in AI. These models are developed using algorithms designed to optimize for generality and versatility in their outputs, and they are trained on diverse data sources and extensive datasets to perform a wide range of tasks, including those for which they were not originally designed. A single general-purpose AI model can therefore be integrated into numerous downstream systems and applications, which has made such models increasingly vital across industries.
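
As a rough illustration of that reuse, the following sketch runs one small, publicly available general-purpose model against two different downstream tasks via prompting. It assumes the open-source Hugging Face transformers library and the GPT-2 model, neither of which is referenced in the Act; the point is only the one-model, many-uses pattern, as a model this small would produce weak results in practice.

from transformers import pipeline

# One small public general-purpose model (GPT-2, used here purely as an
# example) reused for two different downstream tasks via prompting.
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Summarize: The EU AI Act introduces a risk-based framework for AI.",
    "Question: What does the EU AI Act regulate? Answer:",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=30)
    print(result[0]["generated_text"])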

Challenges for financial institutions

The precise obligations outlined for general-purpose AI models, especially those posing systemic risks, demand special attention. Financial institutions must align their development pipelines with the new framework while also ensuring existing infrastructure is compliant. This comes as shareholders and investors pressure firms to use emerging technologies to deliver new products and improve returns.

AI systems used for processes such as credit scoring, fraud detection and recruitment will generally be categorized as high-risk under the AI Act. The categorization of risk levels and the transparency, accountability and explainability obligations greatly raise the compliance bar.

The tight timelines for adherence may strain existing systems and processes, necessitating quick and effective action to avoid heavy penalties. The task of identifying potential model risks and implementing governance structures becomes critical, and institutions must juggle it alongside their routine operations.

Unique opportunities are on the rise

By bringing in a common set of requirements, the AI Act should also create opportunities. Until now, organizations have been making their own decisions about governance and rules for artificial intelligence; those decisions are now largely in the hands of regulators, which provides a clear, common path to follow. Even though implementation might sometimes be a complex process, once completed, it will be consistent and easier to anticipate going forward.

In some areas, it’s already clear that AI can deliver value for corporations and customers. For instance, AI-driven chatbots streamline customer support, enhancing the user experience. Additionally, advanced AI algorithms analyze vast datasets to uncover intricate patterns, empowering financial institutions in the predictive analysis that is critical for regulatory compliance, KYC/AML (know your customer/anti-money laundering) purposes and investment decisions. And this is clearly just the beginning. New generations of chips promise additional computing power and new applications.
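
To ground the AML example, here is a minimal sketch of unsupervised anomaly detection over toy transaction features, using scikit-learn’s IsolationForest. The features, data and contamination rate are fabricated for illustration; a production AML system would be far more elaborate.

import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount_eur, transactions_last_24h].
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 3.0], scale=[20.0, 1.0], size=(500, 2))
suspicious = np.array([[9500.0, 40.0], [12000.0, 55.0]])  # large and frequent
X = np.vstack([normal, suspicious])

# Isolation Forest flags points that are easy to isolate as anomalies.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomaly, 1 = normal

print("flagged rows:", np.where(flags == -1)[0])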

The requirement to adhere to more transparent standards could pave the way for improved capabilities and greater trust in the technology’s deployment. Organizations will hopefully have a better understanding of—and can explain to the market—exactly what their systems and processes are doing and why. The goal is to remove the “black box” element, whereby technology is a magic cube that mystically produces results. Instead, AI should be a tool that can be explained—and modified if needed.
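
One concrete way to chip away at the “black box” is to measure how much each input actually drives a model’s output. The sketch below applies scikit-learn’s permutation importance to a toy stand-in for a credit model; the synthetic data and feature labels are assumptions for illustration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a credit-approval model trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "tenure", "age"]  # illustrative labels
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")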

Moving forward: What steps should financial institutions take?

Now that the Act has been approved, it is time to take a thorough census of all current and planned AI applications inside an organization. The next step is to conduct comprehensive gap and risk assessments and classify the AI systems into risk levels based on the guidance provided by the EU.
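
A simple machine-readable register can make that census and the subsequent gap assessments repeatable. The sketch below shows one possible shape for such an inventory in Python; every field and entry is an illustrative assumption rather than a prescribed format.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str     # one of the Act's four tiers, once assessed
    in_production: bool
    owner: str

# Illustrative entries only; a real census covers every current and planned system.
inventory = [
    AISystemRecord("credit-scorer", "loan decisions", "high", True, "retail-risk"),
    AISystemRecord("support-bot", "customer chat", "limited", True, "operations"),
    AISystemRecord("doc-summarizer", "internal drafting", "minimal", False, "it"),
]

high_risk = [record for record in inventory if record.risk_level == "high"]
print([record.name for record in high_risk])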

High-risk systems must prove they have met the AI Act’s requirements. Various assessments will be required, including a conformity assessment, a data-protection impact assessment and potentially others. This will probably include a list of potential risks and how a company would respond in the event of the “materialisation of these risks”.

Institutions will need an appropriate AI-governance structure, internal policies and a model-risk-management framework with arrangements for human oversight, model validation, AI-system monitoring, record keeping, complaint handling and redress procedures.

Beyond compliance

AI is sparking a massive technological evolution for organizations. Financial institutions provide vital services and act as responsible intermediaries between societies and governments. As they evolve their business models, they must balance sustainability, technological development and other real-world needs and requirements. As the AI landscape matures, the need for expertise—both inside and outside of companies—will only grow. The EU AI Act is simply one step on a much longer journey.
