
How to spot APP scams before they escalate: Q&A with fraud experts

Published:
7/7/2025

APP scams are part of an extensive and growing fraud playbook. They’re carefully choreographed stories of manipulation; quiet, calculated moves that slip past conventional defences and stay hidden until the damage is done.

What makes them so dangerous is that they unfold across multiple stages and channels, often long before any money changes hands. It’s not just about spotting a suspicious transaction; it’s about understanding the full story, the subtle signals in user behaviour, session activity, device anomalies, and transaction patterns that most systems overlook.

In our recent webinar, “From manipulation to money movement: How to spot APP scams before they escalate”, we pulled back the curtain on how these scams really work: the tactics, the psychology, and the technical footprints they leave behind. Because understanding the full attack chain means spotting earlier signals, reducing impact, and exploring new ways to stay ahead.

We did so thanks to the experience and expertise of Michael Morris, Product Director at Cleafy; Elizabeth Finlayson, Senior Fraud Manager at Monzo Bank; and Becky Holmes, best-selling author and romance fraud speaker. They shared real-world stories, expert insights, and live examples of how early detection and behavioural intelligence can help financial institutions stay ahead of APP scams.

Questions answered by our experts

In this article, we have collected some of the most frequently asked questions from the webinar, along with answers from our experts.

If you missed it, you can watch the full webinar recording.

APP fraud is often seen as a human problem, but where does technology play a role in prevention or earlier intervention?

The biggest signs banks look for are unusual behaviour or unusual contacts. It’s less about the transaction itself and more about the individual involved. To catch scams early, you must look deeper into the person’s activity, not just the payments. You need to look for unexpected contacts, which are common in impersonation scams. For example, if someone suddenly says they’ve spoken to a bank, police, or other authorities unexpectedly, that’s a red flag.

Pressure tactics are another key sign. Say you’re buying a house—there’s naturally pressure to get the deposit paid on time, which is expected. But if you see pressure that doesn’t align with the actual purchase or situation, that’s an early warning. That’s when you start focusing on the user’s behaviour rather than just the transaction itself—what’s driving these pressure points? Why is there urgency?

Unusual spending patterns are also important: maybe a person who usually pays by card suddenly switches to bank transfers, gift cards, or cryptocurrency, which may be out of character. Changes to contact details or odd communication patterns also add to the user profile and raise suspicion.

On the technology side, remote software access is a huge factor now. It used to be associated mostly with unauthorised transactions, like malware draining accounts without consent. But now fraudsters often ask for explicit permission, convincing victims that they need access to manage investments or provide help. They walk the victim through every step on their device, which is powerful social manipulation.

These behaviours offer strong signals for any institution to ask whether this is normal for that user. Environmental and anonymity factors also matter: are IP addresses changing frequently? Is there access from different locations or devices? Are network settings being altered?

You move beyond traditional checks like “Is this a high-value payment?” to examining what happened beforehand. Has something changed in the user’s behaviour or environment?

Fraud detection is not one-size-fits-all. It requires looking at a combination of signals to catch early warning signs.

At Cleafy, we focus on complete visibility across the user journey: contact detail changes, spending behaviour shifts, different types of interactions, and environmental anomalies like device changes. The key is correlating all those data points to build a rich context. That context helps decide how to respond and lets us tailor the approach to what is actually happening in each specific case. This approach enables prevention rather than reaction.
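The idea of correlating weak individual signals into one risk context can be illustrated with a minimal sketch. All field names, weights, and thresholds below are hypothetical, chosen only to mirror the signals discussed above, not any vendor's actual scoring model.

```python
# Illustrative sketch: combining independent session signals into a single
# risk context, rather than scoring the transaction in isolation.
# All fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class SessionContext:
    contact_details_changed: bool   # e.g. phone or email updated recently
    new_payment_channel: bool       # card user suddenly making bank transfers
    remote_access_detected: bool    # remote-control software active in session
    new_device: bool                # device fingerprint not seen before
    ip_churn: int                   # distinct IP addresses seen this session

def risk_context(ctx: SessionContext) -> tuple[int, list[str]]:
    """Correlate weak signals into one score plus human-readable reasons."""
    score, reasons = 0, []
    checks = [
        (ctx.contact_details_changed, 2, "recent contact-detail change"),
        (ctx.new_payment_channel, 2, "out-of-character payment channel"),
        (ctx.remote_access_detected, 4, "remote access software in session"),
        (ctx.new_device, 1, "unrecognised device"),
        (ctx.ip_churn > 3, 2, "frequent IP changes"),
    ]
    for triggered, weight, reason in checks:
        if triggered:
            score += weight
            reasons.append(reason)
    return score, reasons

# A session with several correlated anomalies scores far higher than any
# single signal would on its own.
score, reasons = risk_context(SessionContext(True, True, True, False, 5))
```

No single flag here is conclusive; the value comes from the correlation, which is the point made above about building rich context before deciding how to respond.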

What’s the usual reaction of victims after finding out they have been scammed?

Shame, guilt, remorse. A lot of people ask, “Where do I go now? Who do I speak to? Can I get my money back? If so, how?” That’s a common theme in fraud, but especially in romance fraud. There’s a real stigma around it. People feel like they’ve been duped, like someone’s got one over on them. And nobody wants to feel like that.

It’s embarrassing. And when it’s romance fraud, where your emotions and trust have been manipulated, it feels like you’ve been made a fool of, emotionally and financially. That emotional damage is often greater than the financial loss.

When you’ve been scammed, it shakes your confidence in your own judgment. And not just financially, but across your whole life.

What additional support or protections could be provided by banks?

A lot of the time, people don’t want advice from a bank: they want to go in, get their stuff done, and leave. Banks don’t always speak to them with as much respect as they need. Front-of-house staff, if you like, often aren’t trained to handle what is essentially quite an emotional issue.

One of the big issues around fraud is that there is no victim care. With any type of fraud, you’re told to send it to Action Fraud. You’re not going to get any emotional support there. 

Banks need to signpost people to the correct places. They need to train their staff better, not just tick a box and send someone on their way. It’s no good asking, “Are you all right, love?” and then off they go.

It’s also important to know that banks can call local police and request a welfare check. I’ve had cases where I spoke to someone, and the police turned up because they went into their bank, didn’t want to stay around or be signposted anywhere, and the bank was genuinely concerned. So they called the local police, who went and spoke to the person.

How important is collaboration between fraud and cybersecurity teams in detecting and stopping scams and fraud?

Working with cybersecurity is essential, as it offers a holistic view. The cyber team sees how attacks start, often spotting early signals through device and network data before they become obvious in fraud monitoring. This shared intelligence gives you a complete picture of where attacks originate in your ecosystem.

You get unique data and insights that are massive in individual cases. Shared intelligence and collaboration between fraud and cybersecurity teams is critical.

Fraud teams tend to focus on understanding individual victims and transactional activity, but security can offer a secondary review that speeds things up. For example, it can spot remote access across multiple people and enable rapid intervention before any funds are lost to others.

Working closely with security teams is essential because they offer an entirely different perspective. What might look like just pounds and pence or transaction volumes to fraud teams is supported by intricate data from security, from the moment you open the app, even before making a payment, or when you add confirmation. That’s where fraud and cybersecurity overlap, and it is critical.

What would you like to see across UK firms to enable effective collaboration? How often do you reference CIFAS? What is your biggest industry-wide wish for greater effectiveness?

CIFAS is valuable in helping financial institutions share intelligence on confirmed fraud cases, so we’re not all tackling the same problems in isolation. But it’s still essentially a reactive process; the fraud has to happen before it gets flagged. That’s helpful, but by then the damage is often already done.

What we’re seeing with Cleafy customers is a shift to something much earlier and much broader. Banks using the platform aren’t just spotting threats in real-time; they’re contributing to collective intelligence across the network. That could be anything from suspicious mule account behaviour to emerging threats like zero-day malware or session hijacking. If one bank sees it, the others are better prepared.

If I had one big wish, it’s for more FIs to invest in this kind of early visibility and collaboration. It would move us from reacting to fraud to actively preventing it, before the money moves and attackers get a foothold.

How well do you think the APP claims process set up by Pay.UK works?

The Pay.UK framework is a really important step. It shows there’s real momentum behind making outcomes fairer for victims. However, in practice, the experience still varies significantly from one bank to another. That’s not necessarily down to evil intent; it reflects how complex these cases can be and how hard it is to balance speed, fairness, and fraud detection at scale.

What about scams involving B2B payments? Can corporate clients be scammed into paying regular invoices to a scammer's bank account?

Yes, they can. Many of the same tactics and techniques are used to carry out both authorised and unauthorised payments against businesses and in corporate banking. Fake suppliers, supplier impersonation, and changes to beneficiary details are common examples.

In corporate banking, attacks are often more technologically advanced and occur over a more extended period through credential theft, remote access, and malware. 

The approaches discussed in the webinar about increasing visibility across the entirety of the user journey, leveraging multiple detection methods and technologies that work in harmony, and leveraging data across fraud and cybersecurity are key to detecting and preventing threats in B2B and corporate banking.

What type of APP scams do you suspect we will see in the future with advances in AI? How will scams evolve, and what should we be doing to educate customers on these risks?

There are two paths of evolution that AI will enable for threat actors. 

The first is scale. Social engineering, particularly vishing, is a human-driven endeavour constrained by the hours in a day, the languages spoken, and the number of people employed in such activities. Generative AI will significantly reduce these barriers and allow threat actors to scale social engineering attacks across borders, 24/7. Expect deepfakes to become more sophisticated.

Second, AI will also lower the barrier to entry for technology to be used in concert with social engineering: we’ll see a return to malware-driven attacks as app stores open up, new devices become mainstream, and capabilities become attack vectors.

Regarding scams and education, it’s important to note the things that are unlikely to change, as discussed in the webinar. Scams prey on human emotion, our desire for quick wins or something too good to be true, leveraging a sense of urgency. Consumers need to continually be reminded to be aware of these unchanging points of leverage that attackers use.  

Do you think UK financial firms' systems do enough to scan across their network (aka within the company) to identify accounts indicating fraudulent activity and accumulating transactions? What more can be done within each firm?

The approach to stopping fraud needs to be a layered one, and many institutions recognise this. However, gaps remain: too many still treat the point of transaction as the moment to detect and act; they use only one or two tools against threats that leverage a whole arsenal of techniques and tactics; and the relationship and data sharing between fraud and cybersecurity teams is suboptimal.

Firms need to consider a more comprehensive approach to defence that includes:

  • Real-time monitoring across the full user journey, starting pre-login and encompassing both technical and behavioural data; 
  • Multi-dimensional analysis that correlates detection methods and data across sessions; 
  • Unified threat intelligence and contextual awareness leveraging external and internal data; 
  • Adaptive response orchestration that acts on anomalous signals and identified attack patterns in real time.

That’s what we call the Fraud Extended Detection and Response (Fraud XDR) approach.
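The four layers listed above can be pictured as a simple pipeline. This is a conceptual sketch only; the stage names, signal shapes, and decision labels are illustrative, not Cleafy's actual architecture or API.

```python
# A minimal sketch of the four defensive layers as a pipeline:
# monitor -> correlate -> enrich with threat intelligence -> respond.
# All names and data shapes are illustrative.

def monitor(session_events):
    """Layer 1: collect anomalous technical/behavioural signals, pre-login onwards."""
    return [e for e in session_events if e.get("anomalous")]

def correlate(signals):
    """Layer 2: group anomalies from across the session into one picture."""
    return {s["kind"] for s in signals}

def enrich(kinds, threat_intel):
    """Layer 3: mark which anomalies match known external/internal intelligence."""
    return {k: (k in threat_intel) for k in kinds}

def respond(context):
    """Layer 4: pick an adaptive response from the correlated, enriched context."""
    if any(context.values()):
        return "escalate"                  # matches a known attack pattern
    return "monitor" if context else "allow"

events = [
    {"kind": "remote_access", "anomalous": True},
    {"kind": "login", "anomalous": False},
]
action = respond(enrich(correlate(monitor(events)), {"remote_access"}))
```

The design point is that each layer narrows and enriches what the previous one produced, so the response is driven by session-wide context rather than a single transaction check.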

How do you balance friction as a control to prevent fraud without harming the customer experience, and remove unnecessary blockers in line with the Consumer Duty?

It’s important to note that some friction is necessary to reduce risk and is inevitable. The best way to balance the deployment of friction is to:
a) Apply it as early in the user journey as possible. Blocking the transaction is the nuclear option; where possible, don’t wait for a user to reach that point;
b) Tailor your friction management approach. A one-size-fits-all “block transaction” approach is insufficient. Consider step-up authentication, password resets, in-app messages, and contact-centre outreach as alternatives to blocking transactions;
c) Reduce false positive alerts that unnecessarily impact genuine users. The approaches outlined in the previous answer can help achieve this.

Can you share an example of a scam where Cleafy helped fraud teams decide when and how to intervene without disrupting genuine users?

In our latest case study, you can read how we helped a European bank with over one million customers successfully tackle a wave of sophisticated scam threats. In doing so, the bank protected millions of euros and maintained the trust of its customer base during a time of heightened risk.

This success story offers a detailed look into the bank's challenges, the solutions it implemented, and the results that transformed its approach to fraud prevention. From improving threat detection to streamlining operations across departments, the story demonstrates how a proactive strategy can deliver measurable impact.

