More than eight in ten (81%) financial services organisations using Artificial Intelligence (AI) have adopted the technology for customer service purposes, while three in ten (29%) use it to prevent and detect fraud, with the same proportion (29%) applying it to risk assessment.
However, despite its growing use, key concerns remain, particularly around accountability and the potential for bias in AI-driven or AI-influenced decisions. Data privacy risks associated with AI also rank high among the sector’s concerns.
This is according to the results of a new survey by Ireland’s professional body for compliance professionals, the Compliance Institute, which polled approximately 150 compliance experts working primarily in Irish financial services organisations nationwide.
When asked what concerns, if any, they had regarding the use of AI in compliance and financial services:
- More than eight in ten (81%) compliance experts said they are concerned about the accountability and explainability of AI-driven decisions
- Seven in ten (69%) are concerned about the potential for bias in AI decision-making
- Six in ten (59%) are worried about data privacy and GDPR compliance risks
- Almost six in ten (56%) are concerned about a lack of regulatory clarity around AI.
Commenting on the survey findings, Michael Kavanagh, CEO of the Compliance Institute, said:
“Given that chatbots and virtual assistants are such a common sight when surfing the internet today, it’s perhaps no surprise that our survey shows that, of those organisations using AI, customer service is the main reason they do so. However, it is also interesting to note the level of disquiet around the use of AI in organisations, particularly around AI bias and the accountability of AI-driven decisions, which perhaps suggests an inherent distrust of AI. Ultimately, AI will never be able to replicate the empathy that humans can bring to decision-making – as well as the nuanced approach they can take.
While AI can have many benefits for the financial services sector, including its ability to detect fraud and to reduce customer service costs, its fast-growing capabilities and increasingly widespread use have raised concerns, particularly around privacy and misinformation issues and the lack of regulatory clarity around AI.”

Other headline findings from the Compliance Institute research reveal that:
- AI-driven tools are not yet widely adopted in the financial services sector, with only 2% of organisations using them extensively and 18% using them on a limited basis.
- More than half of the firms (54%) are considering AI for compliance monitoring, fraud detection, or risk management.
- More than one in four (27%) have no plans to implement AI tools in the near future.
- Among organisations currently using AI, its use in personalised financial products (10%) or trading and investment strategies (3%) is less commonplace.
Mr Kavanagh added:
“With only one in five organisations using AI tools, and most of these only doing so on a limited basis, the financial services sector is clearly cautious about the use of AI in firms. The finding that more than half (54%) of the firms surveyed are considering AI for compliance monitoring, fraud detection, or risk management shows that many in the financial services sector have not ruled out AI – but they are being careful about whether and how they might adopt it. This suggests that there is a strong awareness in the sector of the risks of AI and a determination to ensure the technology is used responsibly.
This is a positive reflection on the sector. While AI has the potential to deliver many benefits, it is important that AI is used in a safe and transparent way, and that the use and adoption of the technology is overseen so that harmful outcomes are prevented.”