As AI evolves, agentic AI has emerged as one of 2025's defining tech trends: autonomous AI systems capable of making decisions and executing complex tasks with limited or no human input. Financial institutions are already exploring and adopting agentic AI to boost efficiency, scalability, and innovation. However, with that transformative potential come legal and regulatory risks, particularly when third-party AI agents act on behalf of customers. This article explores the legal risks financial institutions face as they look to embrace agentic AI, along with measures to mitigate those risks.

Definition of agentic AI and its use cases

Because agentic AI is a comparatively recent development, there is no universally accepted or harmonised definition. The UK government, for instance, defines agentic AI as AI systems composed of agents that can behave and interact autonomously in order to achieve their objectives. The EU AI Act, by contrast, makes no reference to agentic AI. Consequently, agentic AI is typically understood by reference to the capabilities that distinguish it from other digital systems:

  • Autonomy: Agentic AI is not dependent on continuous instruction and oversight. It is able to make decisions and take actions proactively to achieve desired outcomes.
  • Goal-oriented reasoning: Agentic AI operates with a view to achieving certain goals or outcomes and can reason and independently select the best means of achieving them.
  • Engagement with external environment: Agentic AI is not limited to the confines of its own digital ecosystem. It can process new information from external sources (e.g. applications, sensors and databases) and interact with other digital systems. This creates the possibility of multiple agents collaborating with one another.
  • Unstructured data: Agentic AI can leverage machine learning technology, such as large language models (LLMs), to parse and analyse vast quantities of unstructured data.

Agentic AI's autonomy and goal-oriented nature make it conceptually distinct from generative AI applications, such as ChatGPT, which require explicit human instructions or prompts to produce an output. Equally, agentic AI is distinct from robotic process automation (RPA) tools, such as scripted chatbots, which complete repetitive, structured tasks. By comparison, agentic AI leverages machine learning technology to complete complex tasks and adapt to a variety of use cases. Agentic AI can, therefore, be considered an advanced category of AI that combines the versatility and competency of machine learning technology with the utility of RPA.

The attributes outlined above make agentic AI well-suited to a variety of use cases in financial services:

  • Consumer-Directed Financial Agents: Financial services customers could (albeit potentially in breach of their terms) give their online banking login credentials to third-party AI agents to interact with financial institutions on their behalf (more on this below).
  • Automated and Continuous Investigations: Agentic AI could be used to continuously analyse extensive and diverse datasets for AML/KYC compliance and fraud detection with above human-level speed and accuracy, and proactively conduct further investigation where appropriate.
  • Natural Language Transaction Execution: Agentic AI could utilise natural language processing to initiate financial transactions based on conversational prompts.
  • Dynamic Risk Profiling: Agentic AI could be used to continuously assess and select relevant variables to match customer and investment risk profiles in real-time in response to market conditions, customer behaviours, or regulatory updates.
  • Real-Time Asset Management Services: Agentic AI could be used to autonomously monitor market conditions, execute trades based on predetermined investment strategies, and adapt portfolios to market developments.
  • Reactionary Market Research: Agentic AI could be used to independently generate market research and insights, reacting spontaneously to global events, market movements, and regulatory announcements.

Risks for financial institutions adopting agentic AI

Agentic AI poses many of the same risks for financial institutions as generative AI. However, its degree of autonomy, limited human oversight, increased attack surface and the possibility of AI agents interacting with each other could exacerbate those risks:

  • Data privacy: Agentic AI's autonomy necessitates sacrificing a degree of control over its actions, which heightens the risk of AI agents sharing sensitive data (internally and externally). As AI agents become more sophisticated and capable, deployers may lose control over the decision-making process of the agents, including how they process personal data.
  • Cybersecurity: Granting AI agents the ability to access and, crucially, interact with internal systems and networks could create significant cybersecurity risks should malicious actors be able to control or influence AI agents, for instance via a prompt injection attack.
  • Bias and discrimination: Reduced human oversight increases the risks of overlooked instances of algorithmic bias, and agentic AI's ability to take autonomous actions could see such bias result in actionable losses for consumers. For example, the use of agentic AI for autonomously approving or denying loan applications based on biased data could result in systematic discrimination, causing financial harm to affected consumers and exposing institutions to regulatory penalties and reputational damage.
  • Scheming and deception: AI agents with misaligned goals may actively conceal their intentions until they find an opportunity to pursue harmful objectives. AI agents could disable oversight mechanisms and distort outputs.
  • Consumer Duty: The above risks could make it challenging for financial institutions to deliver good outcomes (e.g. accurate, unbiased and explainable outcomes) to retail consumers, as required under the FCA Consumer Duty or equivalent laws in other jurisdictions.
  • Operational resilience: Given the limited number of satisfactory AI model developers, outsourcing agentic AI systems for implementation in critical business areas could increase third-party dependency risks.

Risks posed by third-party AI agents interacting with financial institutions

The risks that agentic AI poses for financial institutions do not arise only from its internal implementation. A growing number of AI agents on the consumer market mimic human behaviour using their own web browser (for instance, Operator by OpenAI). This enables AI agents to carry out online tasks autonomously, such as internet shopping and travel planning. The same technology could see an AI agent, having been provided with a customer's online banking credentials, interact with financial institutions to complete financial tasks on the customer's behalf. Such a scenario would undoubtedly introduce novel legal and commercial risks:

  • Malicious external agents: A financial institution may not be able to identify or verify the underlying instructions guiding an AI agent using its services. As AI agents are designed to emulate human behaviour, distinguishing between human users and AI agents may become challenging. This ambiguity could undermine existing mitigation measures such as cybersecurity controls, identity verification and fraud detection tools, which are designed with human users in mind.
  • Could the financial institution refuse access to AI agents? For some products, such as payment accounts, access to data by third parties is subject to licensing requirements in some jurisdictions. If the AI agent is provided to the consumer by an appropriately licensed entity, it may be difficult for the financial institution to refuse that AI agent access. However, such regimes cover only a limited set of data and account types. Until the status of AI agents is confirmed by the relevant authorities, financial institutions should decide carefully, depending on the regulatory context, whether to refuse access when an AI agent is identified.
  • Customer relationship disintermediation: Reliance on external AI agents could diminish financial institutions' direct customer interactions. This in turn could lead to a commoditisation of financial services, which would negatively impact brand equity and client retention. Additionally, the interposition of third-party AI agents could complicate the application of consumer protection laws and the allocation of liability more broadly. For instance, how would disclosures or advice reach the customer; will customers understand the risks the AI agent is taking on their behalf; and which entity would be responsible for bad customer outcomes?
  • Operational resilience: The potential surge in traffic from a multitude of AI agents continuously accessing financial institutions' online services could challenge digital scalability and compromise system performance.
  • Systemic risks: Rapid, and potentially co-ordinated, autonomous financial movements made by AI agents across multiple institutions could significantly increase systemic risks, potentially triggering market volatility or even liquidity crises. For example, if employed at scale, AI agents all reacting in the same way to reports of a liquidity issue at a bank could trigger a run on that bank by moving funds to other institutions, exacerbating the problems.

Regulatory framework in the EU and the UK

The regulatory environment surrounding agentic AI remains ambiguous, as existing frameworks regulating AI more broadly have yet to address agentic systems explicitly. Agentic AI applications will likely fall within the EU AI Act's definition of an "AI system" (although that definition does not expressly refer to autonomous agents). Under the EU AI Act, providers and deployers of an agentic AI system will likely be subject to obligations based on the system's risk level as determined by its intended use. Some use cases of agentic AI in financial services would likely be considered 'high-risk', such as evaluating creditworthiness (notably, except where this is done to detect financial fraud) and conducting risk assessment and pricing for life and health insurance. Additionally, the EU AI Act allows the Commission to add use cases to the high-risk category where they pose a risk to health and safety or have an adverse impact on fundamental rights, taking into account "the extent to which the AI system acts autonomously". Therefore, for use cases which do not currently fall within the high-risk categories listed in Annex III, the extent of an agentic AI system's autonomy will be one of the key factors in determining its risk classification and the corresponding regulatory obligations under the EU AI Act. However, what degree of autonomy constitutes 'high-risk', and what 'acting autonomously' means in this context, remain unclear.

Unlike the EU, the UK does not currently have an overarching AI law. Instead, it has to date adopted a sector-specific, principles-based and pro-innovation approach to AI regulation. In addition to applying existing laws such as the UK GDPR, the Equality Act, the PRA Rulebook and the FCA Handbook, including the Consumer Duty, to the development and use of AI systems, the UK government's approach to ensuring responsible AI use is centred around five cross-sectoral principles:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Although there is a lack of regulatory guidance on agentic AI specifically in the UK, developers of agentic AI systems should embed these principles by design and ensure ongoing compliance with all relevant existing regulations.

Regulators' delay in addressing agentic AI directly means that, to achieve compliance, financial institutions must interpret how existing laws and regulations apply to fast-moving developments and novel use cases.

Risk mitigation measures

Many of the risk mitigation measures that financial institutions are already taking with regard to GenAI are equally relevant to agentic AI, such as dedicated AI governance structures; human-in-the-loop oversight mechanisms and monitoring GenAI's reasoning processes; thorough vendor due diligence and AI impact assessments; and record-keeping and activity logging. However, it is essential that existing mitigation measures are reviewed and updated in light of the compounded risk factors posed by agentic AI, as outlined above.

Additionally, given that agentic AI undoubtedly presents unique risks, financial institutions may need to adopt further risk mitigation measures (an illustrative sketch of some of these controls follows the list below), which could include:

  • putting in place appropriate contractual protections that effectively clarify liability allocation;
  • requiring human approval for consequential actions;
  • restricting AI agents' access to sensitive data and materials;
  • maintaining the ability to terminate a deployment if suspicious behaviour is detected (i.e. a kill-switch mechanism);
  • carrying out red-team robustness testing; and
  • potentially utilising AI agents in governance tasks, such as monitoring and enforcing obligations, standards and rules, or monitoring other AI agents.
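
By way of illustration only, the minimal sketch below (in Python, using hypothetical names, thresholds and logic chosen purely for this example) shows how a human-approval gate, activity logging and a kill-switch mechanism might be layered around an agent's proposed actions. It is a sketch under stated assumptions, not a recommended or definitive implementation.

    # Minimal sketch of an agent-action guardrail. All names, thresholds and
    # actions are hypothetical and chosen for illustration only.
    import logging
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-guardrail")

    @dataclass
    class AgentGuardrail:
        approval_threshold: float = 1_000.0   # actions above this value need human sign-off
        kill_switch_engaged: bool = False     # set True to halt all agent activity
        audit_trail: list = field(default_factory=list)

        def engage_kill_switch(self) -> None:
            """Immediately stop the agent from taking further actions."""
            self.kill_switch_engaged = True
            log.warning("Kill switch engaged; all agent actions blocked.")

        def request_action(self, action: str, amount: float,
                           approved_by_human: bool = False) -> bool:
            """Record, gate and (where necessary) block a proposed agent action."""
            self.audit_trail.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "action": action,
                "amount": amount,
                "approved_by_human": approved_by_human,
            })  # record-keeping / activity logging

            if self.kill_switch_engaged:
                log.error("Blocked '%s': kill switch is engaged.", action)
                return False
            if amount > self.approval_threshold and not approved_by_human:
                log.info("Held '%s' (%.2f) pending human approval.", action, amount)
                return False
            log.info("Allowed '%s' (%.2f).", action, amount)
            return True

    # Example: a low-value action proceeds, a high-value one is held for review,
    # and everything is blocked once the kill switch is engaged.
    guardrail = AgentGuardrail()
    guardrail.request_action("transfer_to_savings", 250.0)
    guardrail.request_action("external_payment", 25_000.0)
    guardrail.engage_kill_switch()
    guardrail.request_action("transfer_to_savings", 10.0)

In practice, the approval thresholds, the scope of the agent's permissions and the conditions for engaging the kill switch would need to reflect each institution's own risk appetite, governance framework and regulatory obligations.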

Given the current state of regulatory ambiguity, horizon scanning will be important as financial institutions look to deploy agentic AI, as regulators are already beginning to take notice. For example, in its recent AI Sprint, the UK Financial Conduct Authority (FCA) explored, among other things, how agentic AI could give rise to better consumer outcomes. Financial institutions should therefore prioritise proactive engagement with regulatory developments so they are well placed to comply with upcoming regulations and navigate the interplay of existing frameworks.

Authored by John Salmon, James Black, Louise Crawford, Daniel Lee, and Felix Scrivens.

If you would like to discuss the issues outlined in this article, please get in touch with a member of the team for more information.

For additional resources related to AI legal risks and regulations, please visit our AI Hub.

This article is for guidance only and is a non-exhaustive summary only of certain aspects of the points discussed and should not be relied on as legal advice in relation to a particular transaction or situation.

Please contact your normal contact at Hogan Lovells if you require assistance or advice in connection with any of the above.
