
How AI Regulation is shaping the future of fraud prevention and data privacy: Interview with Nicoló Rappa

Published:
27/11/2025

When Artificial Intelligence enters the conversation around fraud prevention, privacy concerns are often the first to surface. The idea of algorithms combing through vast amounts of financial data can sound like a privacy nightmare. 

However, according to our experts, privacy and innovation are not mutually exclusive — in fact, when implemented correctly, they can complement each other.

In this interview with Nicoló Rappa, Senior Legal and Data Protection Counsel at Cleafy, we explored how the European AI Act and other upcoming regulations are shaping the development and governance of AI models, and the trust financial institutions place in them.

When we talk about Artificial Intelligence and fraud prevention, most people immediately think of privacy risks. Yet you argue that privacy and innovation can coexist. How so?

Understandably, privacy concerns are at the forefront when we discuss AI in fraud prevention. However, privacy and innovation can not only coexist but often reinforce each other.

On one hand, technologies such as machine learning make it possible to detect anomalies and fraudulent activity at remarkable speed, protecting both users and businesses. On the other hand, there are now mature methods and frameworks that allow data to be analysed securely and lawfully — using minimised, pseudonymised, or even anonymised datasets.
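To make "pseudonymised analysis" concrete, here is a minimal Python sketch of the general idea, not Cleafy's implementation: the field names, the example record, and the keyed-hash approach are all assumptions. A direct identifier is replaced with a keyed hash before the record reaches a detection model, so behaviour can still be linked across transactions without exposing who the customer is.

```python
import hmac
import hashlib

# Hypothetical secret key, stored and managed separately from the analytics environment.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an account number) with a keyed hash.

    Records belonging to the same customer still map to the same token, so
    anomaly detection can link behaviour over time without seeing the identity.
    """
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical transaction record: only the pseudonymised reference and the
# behavioural features needed for fraud scoring are passed downstream.
transaction = {"account_id": "IT60X0542811101000000123456", "amount": 1250.0, "new_device": True}
analysable = {
    "account_ref": pseudonymise(transaction["account_id"]),
    "amount": transaction["amount"],
    "new_device": transaction["new_device"],
}
print(analysable)
```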

The key lies in privacy by design: integrating robust safeguards from the outset. This includes strong data governance, encryption, access monitoring, and risk assessment processes. When these are in place, AI can operate effectively without exposing personal identities — and in many cases, it protects privacy more efficiently than manual processes ever could.

In short, when innovation is applied correctly and responsibly, AI doesn’t undermine privacy; it helps safeguard it.

Where does this perception that privacy “limits” AI come from?

This perception stems from three main sources.

First, a cultural misunderstanding about data. For years, the industry has operated under the “more data = better results” mantra. As a result, principles such as data minimisation or processing restrictions have been viewed as barriers. Today, however, we know that advanced models can perform exceptionally well even when trained on reduced, pseudonymised, or synthetic data.

Second, there is a limited awareness of modern data protection techniques. Many still equate privacy with simply not using or not sharing data. In reality, the GDPR and modern privacy frameworks provide a wide range of tools — from encryption and zero-knowledge proofs to anonymisation and advanced access control — that enable innovation without compromising security.

Third, the media narrative. Stories of AI misusing personal data or violating rights have fuelled the notion that privacy and innovation are incompatible. In truth, these incidents are often the result of poor governance or flawed design, rather than privacy itself.

Ultimately, privacy is the framework that enables AI to develop safely, ethically, and sustainably over time.

How will future regulations, such as the European AI Act, change how companies manage data and fraud detection models?

The new wave of AI regulation, led by the European AI Act, is transforming how companies design, govern, and monitor fraud detection systems. The impact will be particularly strong across four areas: governance, transparency, data quality, and human oversight.

1. Classification as “high-risk” systems.
Most fraud detection models will likely fall under the “high-risk” category, as they can directly affect individuals’ rights and freedoms. This means stricter requirements for data quality, model robustness, transparency, and human supervision.

2. Stronger data governance.
The AI Act emphasises the importance of utilising relevant, unbiased, and well-documented data. Companies will need to ensure continuous monitoring and validation of their models throughout their lifecycle.

3. Transparency and documentation.
Organisations will be required to clearly document the purposes of their models, the decision logic, data sources, and performance metrics. The ability to explain how a model detects fraud will become a legal — not just operational — obligation.

4. Human oversight.
Fraud detection systems can no longer operate as “black boxes.” Human review, the ability to contest automated decisions, and regular reassessment of high-impact cases will all be essential to reducing false positives and discrimination risks.

What’s the next big step toward making AI not just compliant, but responsible?

The next frontier isn’t merely compliance — it’s responsibility. Too often, organisations focus on ticking regulatory boxes rather than embedding accountability into the technology itself.

That means building models that aren’t just effective, but explainable. The ability to articulate why and how a decision was made will be essential. It’s not enough for an algorithm to be accurate; it must also be transparent, allowing for verification, challenge, and improvement.
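As a rough illustration of what "explainable" can mean at the level of a single decision, the sketch below uses a simple weighted score whose per-feature contributions are reported alongside the result. The features, weights, and values are invented for the example and do not represent any real fraud model.

```python
# Illustrative weights for a toy linear risk score (assumed, not a real model).
WEIGHTS = {
    "new_device": 2.0,      # login from a device never seen before
    "amount_zscore": 1.5,   # how unusual the amount is for this customer
    "night_time": 0.8,      # transaction outside the customer's usual hours
}

def score_with_explanation(features: dict) -> tuple:
    """Return the total risk score and each feature's contribution.

    Exposing the contributions lets a human reviewer see why a transaction
    was flagged, verify the reasoning, and contest the automated outcome.
    """
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, reasons = score_with_explanation({"new_device": 1.0, "amount_zscore": 3.2, "night_time": 1.0})
print(f"risk score = {total:.2f}")
for feature, contribution in sorted(reasons.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{contribution:.2f}")
```

Real fraud models are far more complex, but the principle carries over: whatever the underlying technique, the system should be able to surface which signals drove a decision so that it can be verified, challenged, and improved.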

Equally, human oversight must be meaningful. People involved in reviewing AI-driven decisions should understand what they’re evaluating and be empowered to intervene. AI doesn’t remove human responsibility — it redistributes it between humans and machines.

Ultimately, responsibility must extend across the entire AI lifecycle, encompassing data collection and supplier selection, as well as model maintenance and updates. This continuous, cross-functional approach is what builds long-term trust.

In essence, responsibility is how we’ll measure the quality of AI itself. The goal isn’t just to follow the rules, but to earn the confidence of those who rely on these systems.

What advice would you give to banks and financial institutions looking to adopt AI solutions but concerned about current or future regulations?

Waiting for regulatory certainty before moving forward often means missing valuable opportunities. AI is both powerful and complex, and integrating it successfully requires foresight and careful planning.

The best approach is to start with responsible-by-design principles, where innovation is aligned with strong governance, risk management, security, and data protection.

This can begin with a few practical steps:

  • Map out use cases, assess their impact, and classify their risk levels.
  • Document data flows, model characteristics, and decision-making criteria.
  • Establish structured processes for ongoing risk evaluation and model monitoring.
  • Involve legal, compliance, and security teams early on to ensure a unified view.
  • Choose technology partners that prioritise transparency, traceability, and robustness.

By following this path, organisations can innovate safely and align naturally with evolving regulations such as the AI Act without needing disruptive adjustments later.

In the end, regulation shouldn’t be feared. It should be used as a compass that guides responsible innovation. When technology evolves alongside sound governance, it doesn’t slow progress; it accelerates trust, efficiency, and competitiveness.

Conclusion

As AI becomes a cornerstone of modern fraud prevention, regulation is not the enemy of innovation — it’s its enabler. The upcoming AI Act is prompting companies to reassess not only how they utilise data, but also how they integrate accountability into their systems from the outset. 

The message is clear: the future of AI isn’t just about compliance. It’s about trust. And in the financial world, trust has always been the ultimate currency.
