When Jamtara first aired on Netflix, it felt shocking. A small town in Jharkhand was running a phishing empire with just cheap phones and rehearsed scripts. But watching it today, it seems quaint. Those boys dialled numbers one at a time, tricking people into sharing OTPs. Fraud operations hitting Indian lenders now do not need phone calls, scripts, or even real people. They use AI.
A few months ago, a mid-sized digital lender in India noticed something strange. Over a single weekend, they received 1,400 loan applications. Credit scores looked fine. Aadhaar numbers checked out. Bank statements were clean. Everything passed their fraud filters.
Except none of the applicants were real.
A fraud ring had used generative AI to create fake identities, complete with made-up employment histories, realistic selfies, and bank statements that matched expected income patterns down to the decimal. The lender caught it, but only after giving loans to the first 38 applicants. By Monday morning, those accounts had been emptied.
I wish I could say this was an exception. It is not. If anything, it is a warning.
Fraud has become industrialised. The factory operates on AI.
The RBI's annual report noted that digital payment fraud cases in India exceeded 36,000 in FY2023-24, with losses over Rs 1,750 crore. The actual number is likely much higher because today’s cleverest fraud never looks like fraud at all.
Synthetic identity fraud, where criminals mix real and fake data to create entirely new personas, increased over 100% globally between 2022 and 2024, according to research from the US Federal Reserve and TransUnion.
In India's fast-growing digital lending market, where quick disbursement is a selling point and KYC is increasingly digital-first, the attack surface is expanding rapidly.
What makes this wave of fraud different is not just the scale, but the skill. Fraud rings now use AI to study a lender's approval patterns, reverse-engineer their risk models, and submit applications designed to slip through. They test, adjust, and improve, just like a good product team would.
Legacy rule-based fraud systems were designed for a simpler time. They identify what they have seen before. Against what they haven't seen, they are nearly blind.
The AI arms race
Here is an uncomfortable truth the industry is just starting to face: fraud detection as a separate, downstream function is already outdated.
Most lenders today run fraud checks alongside or after their credit decision-making. An application comes in, gets scored for creditworthiness, then goes through a fraud filter. This made sense when fraud meant stolen identities and forged documents, which you could catch with simple verification steps.
But when the fraud itself is AI-generated and built to pass every verification point, the only way to catch it is by looking at signals that exist before verification even starts. Device fingerprints. Behavioural biometrics: how someone holds their phone, how quickly they type, whether they pause to think or paste pre-filled responses. Network analysis: whether 200 applications are arriving from the same IP range, on the same device model, at suspiciously regular intervals.
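To make the network-analysis idea concrete, here is a minimal sketch of that last check: grouping applications by shared IP range and device model, then flagging groups whose arrival times are suspiciously evenly spaced. Field names, thresholds, and the grouping rule are all illustrative assumptions, not any lender's actual system.

```python
from datetime import timedelta
from collections import defaultdict

def suspicious_clusters(apps, min_size=5, max_jitter=timedelta(seconds=30)):
    """Flag (ip_prefix, device_model) groups that look scripted.

    apps: dicts with illustrative fields "ip", "device_model", "ts".
    A group is flagged if it has at least min_size applications and
    near-constant gaps between submissions (human traffic is bursty;
    bots on a timer are metronomic).
    """
    groups = defaultdict(list)
    for app in apps:
        prefix = ".".join(app["ip"].split(".")[:3])  # crude /24 bucket
        groups[(prefix, app["device_model"])].append(app["ts"])

    flagged = []
    for key, times in groups.items():
        if len(times) < min_size:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # near-identical inter-arrival gaps suggest a scripted submitter
        if max(gaps) - min(gaps) <= max_jitter:
            flagged.append(key)
    return flagged
```

A real system would fold scores like this into the underwriting model rather than apply them as a hard filter, but the shape of the signal is the same.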
These signals should not be trapped in a fraud silo. They belong in the underwriting decision itself. The credit decision and the fraud decision need to be made together, based on the same intelligence.
What this means for lenders
This shift is not just about technology. It involves a fundamental change in how lending institutions view fraud risk.
The more advanced AI-based anomaly detection models commonly used to identify fraud cluster historical applicant behaviour into cohorts of similarly behaving customers. When a new application comes in, it is matched to its nearest cohort, and any behaviour that deviates from that cohort's pattern is flagged as an anomaly.
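The cohort-matching logic above can be sketched in a few lines. This assumes each applicant has already been reduced to a numeric feature vector (say, typing speed and session length) and that cohort centroids have been pre-computed; both are simplifying assumptions for illustration.

```python
import math

def nearest_cohort(vec, centroids):
    """Return (cohort_id, distance) for the closest cohort centroid."""
    return min(
        ((cid, math.dist(vec, c)) for cid, c in centroids.items()),
        key=lambda pair: pair[1],
    )

def is_anomaly(vec, centroids, threshold):
    # An applicant far even from their best-matching cohort is flagged.
    _, dist = nearest_cohort(vec, centroids)
    return dist > threshold
```

Production systems use richer distance measures and learned thresholds, but the principle is the same: score the applicant against the behaviour of the crowd they most resemble.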
The data used for fraud detection must go beyond what is in a bureau file. Behavioural data, device data, social media data, and phone and email network data are no longer optional. AI-based algorithms can map association rings – for example, starting from an initial combination of name, mobile number, and email ID to determine associated numbers, names, and email addresses. Looking at anomalous behaviour across association rings gives a deeper view of potential fraud.
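At its core, association-ring mapping is a connected-components problem: applications that share an identifier belong to the same ring. A minimal sketch using union-find, with field names assumed purely for illustration:

```python
from collections import defaultdict

def association_rings(apps):
    """Group application ids that share a mobile number or email.

    apps: dicts with illustrative fields "id", "mobile", "email".
    Returns rings (sets of application ids) with more than one member.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each application to its identifiers; shared identifiers
    # transitively merge the applications into one component.
    for app in apps:
        union(("app", app["id"]), ("phone", app["mobile"]))
        union(("app", app["id"]), ("email", app["email"]))

    rings = defaultdict(set)
    for app in apps:
        rings[find(("app", app["id"]))].add(app["id"])
    return [ids for ids in rings.values() if len(ids) > 1]
```

A single shared phone number across 40 "different" applicants is the kind of signal no bureau file will ever surface, which is why this graph view matters.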
Fraudsters adapt daily, and static defences go blunt very quickly. Because machine-learning models learn from historical data, new forms of fraud not present in that history are difficult to catch. Fraud models must therefore be retrained continuously, not just once a quarter; otherwise lenders end up swamped, analysing a mountain of false positives.
Moreover, lenders need to stop viewing fraud as a cost centre and start seeing it as a key part of underwriting. Every rupee lost to a fake borrower is a rupee that could have gone to a real one. Each fake identity that slips through lowers portfolio quality and erodes the trust that regulators, investors, and borrowers place in digital lending.
The real question
India's digital lending market is expected to reach $515 billion by 2030, according to a Boston Consulting Group estimate. The potential is tremendous. But so is the risk, since the same infrastructure that allows a farmer in Madhya Pradesh to access a crop loan on his mobile also enables a fraud ring in another city to generate 500 fake applications before lunch.
The lenders who succeed will not necessarily be the ones with the fastest disbursement or the sleekest app. They will be the ones who understand that in a world where fraud is powered by AI, their defence must run on sharper AI – not added at the end of the process, but woven into every lending decision from the very first click.
The arms race has already begun. The only important question is whether you are still fighting it with yesterday's weapons.