Fintech Industry Examiner

When the Bot Says No: How Agentic AI Could Kill Junk-Fee Finance

When Max Levchin says the bots are coming for junk fees, he is not joking.

At Reuters’ Momentum AI Finance conference in New York this week, the Affirm chief executive outlined a world in which “agentic” AI systems quietly sit between consumers and financial products, reading fine print, comparing repayment terms and—crucially—refusing to touch anything with “gotcha” charges.

“I think we’re headed for a future where we have a profound elimination… of business models that are designed to prey on stupidity or lack of attention,” he told the audience, arguing that AI agents will simply decline offers that depend on late fees, teaser rates or deferred-interest traps to make money.

For an industry that still earns billions from precisely those features, that is not a throwaway line. It is a direct challenge to one of retail finance’s most resilient profit engines.

This feature looks at how far AI agents could actually go in dismantling junk-fee finance—and what that would do to business models in cards, BNPL, banking and payments.

The size of the junk-fee problem

“Junk fees” is an unlovely term for a broad category of low-salience charges: credit card late fees, overdraft penalties, “convenience” and “service” fees, foreign exchange mark-ups, add-on insurance, “account maintenance” charges and so on.

The White House Council of Economic Advisers estimates that just ten common types of junk fees cost U.S. households about $90 billion a year, or more than $650 per household.

Financial services are a large slice of that:

  • The Consumer Financial Protection Bureau (CFPB) estimates that Americans paid roughly $120 billion a year in credit card interest and fees between 2018 and 2020—about $1,000 per household.
  • Within that, credit card late fees alone have cost households around $12 billion annually in recent years, with $14.5 billion charged in 2022 as delinquencies climbed.
  • Overdraft and non-sufficient funds (NSF) fees peaked at about $15.5 billion in 2019 across banks and credit unions. Despite subsequent policy changes, fresh research suggests consumers still paid about $12.1 billion in overdraft/NSF fees in 2024.
  • At the biggest U.S. banks alone, overdraft fees remain highly concentrated: ten institutions took in roughly $4.9 billion of overdraft revenue in 2024, with JPMorgan Chase and Wells Fargo each around the $1 billion mark.

In BNPL, the revenue mix looks different—merchant discount fees dominate—but late fees and other penalties are not trivial. A 2025 Affirm investor communication notes that “late fees and other gimmicks” contributed over 13 per cent of BNPL industry revenue in 2021—a figure driven by the providers that do charge such fees; Affirm itself does not.

Regulators have responded with a rolling crackdown. The CFPB tried to cap most credit card late fees at $8, projecting savings of about $10 billion per year, before the rule was put on hold and then struck down in court this April. The agency has also finalized an overdraft rule that would lower typical big-bank overdraft fees from around $35 to $5, delivering an estimated $5 billion a year in savings to consumers—though that rule too faces legal and political challenges.

The Federal Trade Commission, for its part, has adopted a “junk fees rule” targeting hidden fees in hotel and ticket pricing, requiring total prices—including resort and service fees—to be displayed upfront.

In short, the state is trying to regulate junk fees out of existence. Levchin and his peers are proposing something more radical: letting software starve them.

What makes agentic AI different from comparison tools?

Consumers have long had access to comparison websites, budgeting apps and even browser plug-ins that flag cheaper options or coupon codes. Why would “agentic” AI be any different?

The term refers to AI systems that do not just make recommendations but act—they can search, compare, decide and execute transactions, subject to constraints the user sets. Bain & Company describes agentic commerce as tools that “work autonomously to discover, research and buy goods and services,” with early adopters including features such as Perplexity’s in-chat buying and Amazon’s “Buy for Me”.

Levchin’s argument is that, once shoppers delegate buying tasks to persistent agents that have access to their full financial picture, three things change:

  1. Cognitive limits vanish. “My dumb, young, naive self in college… would have been lost in the morass [of fine print] within seconds,” he said. “AI does not have a problem with that… it will find the ‘gotcha’.”
  2. Attention becomes programmable. An AI can be told never to accept offers that include late fees above a threshold, deferred-interest clauses, auto-renewing subscriptions without explicit reminders, or overdraft options above a set price.
  3. Responses are instant and consistent. Where humans forget, get tired or make exceptions, a well-designed agent will say “no” every time a product breaches those constraints.

That is why Levchin told one payments audience last month that “no AI model is going to pay snooze fees or reminder fees or late fees or fall for deferred interest.”

If that proves broadly true, much of retail finance’s existing economics looks exposed.

Illustration: an AI agent at checkout blocking junk-fee products and routing the transaction toward a cleaner payment option.

The economics of “preying on inattention”

Economists have long argued that junk fees flourish because of attention frictions. Customers focus on headline rates and monthly payments; they underweight contingent charges that may or may not hit them later. Firms compete on visible prices and quietly recoup profit in the small print.

The White House CEA’s junk-fee brief makes the point starkly: these fees are a kind of “invisible tax” that inflates prices while dulling normal competitive pressure.

Credit cards are the most obvious example:

  • Issuers have historically relied on late fees, penalty rates and “trailing” interest to maintain returns on portfolios where headline APRs are capped or heavily advertised.
  • CFPB numbers suggest that late fees alone account for a large fraction of total fee revenue—around $12–15 billion annually in recent years.

Overdraft is similar. Banks market “free” checking accounts and then charge $30–$35 a time when customers slip into the red—often multiple times in a day. Prior to reforms, Americans were paying over $15 billion a year just for overdraft/NSF, and the average household that incurred such fees spent more than $200 annually on them.

In BNPL, late fees are smaller in absolute terms but punch above their weight. Industry research and company filings show that late fees, penalty charges and other ancillary income together contribute a mid-teens percentage of revenue for some providers, even before counting the effect of consumers rolling balances into more expensive products.

The whole structure assumes that humans will:

  • Miss a payment here and there.
  • Forget to cancel an auto-renewing subscription.
  • Fail to notice that a “0% financing” plan retroactively charges double-digit interest if they slip by a day.
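The third trap is worth making concrete. Below is a back-of-the-envelope sketch of what a retroactive-interest clause means in dollars; the purchase amount, promotional window and APR are hypothetical, not terms from any real issuer.

```python
# Hypothetical deferred-interest plan -- all numbers are illustrative.
PRINCIPAL = 1_200.00   # purchase amount
PROMO_MONTHS = 12      # "0% if paid in full" window
APR = 0.2699           # retroactive APR applied if the window is missed

def deferred_interest_charge(principal: float, promo_months: int, apr: float) -> float:
    """Interest that accrues silently during the promo period and is
    charged retroactively, from day one, if the balance is not fully
    paid on time. Assumes simple monthly accrual with equal paydown."""
    monthly_rate = apr / 12
    balance = principal
    payment = principal / promo_months
    interest = 0.0
    for _ in range(promo_months):
        interest += balance * monthly_rate  # accrues out of sight during "0%"
        balance -= payment
    return round(interest, 2)

# Slip the deadline by a single day and the entire accrued amount
# lands on the next statement.
charge = deferred_interest_charge(PRINCIPAL, PROMO_MONTHS, APR)
```

On these illustrative numbers, the hidden accrual comes to roughly $175—owed in full for missing the deadline by one day on a plan advertised as 0%.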

Agentic AI is, fundamentally, an attempt to remove those errors from the system.

What the bot actually does at checkout

Imagining an AI agent as a smarter price-comparison tool underestimates its potential leverage. In a mature version of this future, your agent might:

  • Parse all the terms. For each offer—credit card, BNPL plan, overdraft enrolment, subscription—it would read the full terms and conditions, not the marketing summary.
  • Model the outcomes. Using your spending history and cash-flow forecasts, it could simulate the probability that you incur a late fee, hit an overdraft, or trigger a deferred-interest clause.
  • Score the “junk-fee risk”. It would assign each product a risk score based on the proportion of expected costs coming from contingent penalties versus transparent, upfront charges.
  • Block or warn. If a product’s junk-fee risk breaches your chosen threshold, the agent either refuses to proceed or requires explicit override.
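The four steps above can be condensed into a toy decision function. Everything here is an assumption for illustration—the `Offer` fields, the crude doubling uplift for deferred-interest clauses and the 0.25 risk threshold are not any real agent's model.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """Illustrative terms an agent might extract from full T&Cs."""
    name: str
    upfront_cost: float      # expected transparent charges (e.g. headline interest)
    late_fee: float          # per-incident penalty fee
    p_late: float            # modelled probability of a late payment, from history
    deferred_interest: bool  # retroactive-interest clause present?

def junk_fee_risk(offer: Offer) -> float:
    """Share of expected total cost coming from contingent penalties (0..1)."""
    expected_penalties = offer.late_fee * offer.p_late
    if offer.deferred_interest:
        expected_penalties *= 2.0  # crude uplift for retroactive-interest exposure
    total = offer.upfront_cost + expected_penalties
    return expected_penalties / total if total else 0.0

def decide(offer: Offer, threshold: float = 0.25) -> str:
    """Block, warn (require explicit override), or approve."""
    if offer.deferred_interest:
        return "block"   # hard user rule: never accept retroactive interest
    if junk_fee_risk(offer) > threshold:
        return "warn"
    return "approve"
```

A transparent instalment plan with no penalties sails through; a store card with a $41 late fee and a deferred-interest clause is refused outright—exactly the “block or warn” behaviour described above.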

This is not sci-fi. In open banking markets, AI systems already connect to users’ current accounts, categorize transactions and forecast cash flows. Agentic AI is now being deployed in payments orchestration, optimising routing to minimise processing fees and failures.

Extend that from “minimising merchant costs” to “minimising consumer junk fees” and the same tooling becomes a quiet but ruthless negotiator on your behalf.

Will consumers actually let bots do this?

There is one big caveat: trust.

Surveys suggest that while tens of millions of people have tried generative AI, far fewer are ready to hand it their wallet. Bain & Company’s 2025 research finds that about 72 per cent of U.S. consumers have used some form of AI tool, but only 10 per cent have used AI to buy something, and just 24 per cent say they would be comfortable letting an AI agent complete purchases autonomously.

Other polling paints a similar picture: around two-thirds of U.S. shoppers say they would not allow AI to make purchases for them even if it meant better deals, citing worries about data use and whose interests the AI really serves.

Yet history suggests that convenience wins. Many cardholders once balked at auto-pay; today, automatic bill payments are ubiquitous. Browser-saved passwords and one-click checkouts were initially controversial; now they are default.

If AI agents start as supercharged watchdogs—highlighting hidden fees and nudging users to safer options, rather than buying autonomously on day one—consumer comfort could rise quickly. And crucially, the threat they pose to junk-fee economics materialises long before agents handle 100 per cent of spending.

If even a minority of households delegate bill-paying, subscription management and high-risk credit decisions to bots with a “no junk fees” rule, the marginal buyer of those products changes. Pricing that relied on confusion becomes harder to sustain.

Where the impact hits first

1. Credit cards and penalty-fee economics

Credit cards are both the largest and most exposed category.

The CFPB estimates that from 2018 to 2020, Americans paid around $120 billion per year in card interest and fees, with late fees representing a particularly costly and frequently applied component.

AI agents could attack that in several ways:

  • Never missing a due date. Setting up and dynamically adjusting payments so the card is never accidentally late, wiping out much of the revenue from late fees.
  • Refusing “deferred interest” offers. Declining store cards or promotional plans where a single late payment triggers retroactive interest.
  • Portfolio optimisation. Continuously shifting balances to cards with lower effective costs, throttling rewards cards that recoup value through penalty structures.

Over time, issuers whose economics depend heavily on penalty fees might see their best customers—those with stable incomes and good credit—migrate to cleaner products blessed by the bots, leaving a riskier pool behind.

2. BNPL and the fight over late fees

BNPL has grown into a mainstream channel. Reuters estimates U.S. online purchases financed by BNPL reached about $82.4 billion in 2024, up nearly 10 per cent year-on-year, as Klarna, Affirm and others embed themselves deep into e-commerce flows.

Affirm’s positioning here is deliberate. It does not charge late fees and has been lobbying for sector-wide caps, contrasting itself with rivals that rely more on penalties.

In a world of agentic AI:

  • BNPL products with simple, transparent schedules and no punitive back-end fees will be easier for agents to approve.
  • Plans that rely on fees for missed payments or opaque roll-over options will be systematically deprioritised or blocked.

Levchin’s bet is that this push will favour BNPL firms structured more like credit utilities—earning returns from merchant discounts and disciplined underwriting—rather than those closer to fee-driven revolving credit under a different name.

3. Overdraft and “free” current accounts

Overdraft is trickier because, for some customers, it is a deliberate liquidity tool. Still, an agent that watches cash flow in real time can:

  • Warn days in advance that a scheduled payment will trigger overdraft.
  • Propose alternatives: short-term instalment plans, rescheduling bills, or tapping cheaper lines of credit.
  • Auto-decline “courtesy” overdraft coverage above a chosen fee ceiling.

As U.S. regulators push to cap overdraft fees at $5 for large banks—potentially saving consumers another $5 billion per year—agents could accelerate behavioural change by simply refusing to pay $30–$35 for a short-term overdraft when cheaper alternatives exist.

For banks whose deposit franchises still depend on such fees, the risk is clear: the AI will not be shy about pointing customers to rivals that do not rely on overdraft revenue.

Private enforcement vs public regulation

One way to see this shift is as a form of private enforcement.

Regulators are attempting to ban or constrain junk fees via rules: capping late fees, forcing “all-in” prices, cracking down on dark patterns in subscription flows.

But these rules are:

  • Narrow (they apply to specific industries or fee categories).
  • Contested (many are tied up in litigation or vulnerable to political swings).
  • Slow to adapt to new tricks.

AI agents, by contrast, can enforce whatever norms users choose, regardless of whether regulators have yet caught up. A consumer could tell their agent:

  • “Never accept an offer that charges more than $8 in late fees.”
  • “Decline subscriptions that cannot be cancelled in one click.”
  • “Avoid accounts with overdraft fees above $5.”

In effect, that consumer is imposing a personal rulebook on the market. If enough users do the same, the constraints start to look like a quasi-regulatory standard, enforced not by a bureau in Washington but by millions of bots at checkout.
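Those three instructions translate naturally into a declarative rulebook the agent evaluates at checkout. A minimal sketch, in which the offer fields (`late_fee`, `one_click_cancel`, `overdraft_fee`) are hypothetical names, not any real API:

```python
# A personal rulebook: each entry pairs a human-readable rule with a
# predicate over the offer's extracted terms. Field names are illustrative.
RULEBOOK = [
    ("late fee above $8",
     lambda o: o.get("late_fee", 0) > 8),
    ("subscription without one-click cancellation",
     lambda o: o.get("subscription", False) and not o.get("one_click_cancel", False)),
    ("overdraft fee above $5",
     lambda o: o.get("overdraft_fee", 0) > 5),
]

def violations(offer: dict) -> list:
    """Return the name of every rule the offer breaches; empty means acceptable."""
    return [name for name, breached in RULEBOOK if breached(offer)]
```

A card charging a $39 late fee trips the first rule; a checking account with $35 overdraft fees trips the third; a subscription that cancels in one click passes. The point is that the rulebook is the user's, enforced identically on every offer.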

This does not make traditional regulation obsolete. It does mean, however, that firms can no longer rely on regulatory lag or consumer inattention to preserve junk-fee economics.

New moats: transparency, data and “Know Your Agent”

So who wins in this landscape?

Transparent lenders and payment networks

Firms that already rely on clear, upfront pricing—no late fees, no compounding penalty interest, minimal foreign-exchange mark-ups—should find it easier to get whitelisted by consumer agents.

Affirm is obviously positioning itself in that camp, as are some challenger banks that have pre-emptively abolished overdraft and certain penalty fees.

Card networks that can expose richer data (on effective APRs, fee structures and dispute rates) via APIs will also be better placed to integrate into agentic flows that compare not just headline rates but all-in, realised cost of credit.

AI infra and open banking platforms

Agentic AI relies on:

  • Fine-grained financial data—transaction histories, upcoming obligations, credit lines.
  • Access hooks—APIs that allow agents to initiate payments, move funds, and manage subscriptions.
  • Identity and permissions frameworks—so merchants, issuers and regulators know which agent is acting and under what authority.

Open banking aggregators and payment initiators are already building much of this plumbing. Consultancy pieces on agentic finance suggest 20–30 per cent cost savings in financial services workflows when AI agents handle routine decisions.

The strategic question is whether these players remain quiet pipes—or whether some evolve into trusted consumer agent brands in their own right.

“Know Your Agent” as the new compliance layer

If bots are doing the shopping, regulators and merchants will inevitably worry about:

  • Liability: Who is responsible if an agent makes a bad purchase or misses a payment?
  • Manipulation: Could bots be bribed—via affiliate fees or biased training data—to favour certain products?
  • Security: What happens if an agent is compromised or behaves maliciously?

Legal and industry analysts are already floating the idea of “Know Your Agent” (KYA) frameworks, analogous to Know Your Customer rules, where AI agents are given digital fingerprints and must disclose who they represent and how they make decisions.

For fintechs, that implies a new competitive dimension: not just who has the slickest UI, but whose agents are most auditable, configurable and independent.

The risk of “junk bots”

There is also a darker scenario.

If consumers do not own or control their agents—and if models are trained on or steered by commercial incentives—the next generation of junk fees could be bot-targeted, not human-targeted.

Imagine:

  • Agents that dutifully minimise visible fees but quietly steer users toward products that pay the highest referral commissions.
  • “House bots” owned by big retailers or platforms that purport to serve the user but are, in practice, optimising for margin.
  • Complex “credit recommendation engines” that reduce headline rates but embed value in optional add-ons that the bot does not treat as salient.

Surveys already show that many consumers suspect AI shopping tools will mostly help retailers rather than them.

This is where public policy and market design matter. Requirements for explainability, conflict-of-interest disclosures and user override controls will shape whether the typical household’s agent behaves more like a fiduciary or a sales rep.

For financial institutions, that means preparing not just for a world where bots attack existing junk fees, but also for regulatory scrutiny of how they design agents and integrate with third-party ones.

Strategic questions for banks, fintechs and merchants

If Levchin is directionally right, the next five years in consumer finance will not just be about who launches the most eye-catching AI features. It will be about who can survive in a world where:

  • Penalty-fee economics are structurally undermined. Late fees, overdrafts and other contingent charges become rare events rather than core revenue lines.
  • Pricing power shifts to agents. Bots that can instantly compare all-in costs and reliability push firms toward simpler, more transparent pricing.
  • Trust becomes a platform asset. Brands that convince consumers their agents are truly on their side may capture disproportionate share of delegated spending.

Executives should be asking themselves:

  1. What share of our profit comes from low-salience or contingent fees that a rational agent would reject?
  2. If an AI watched every one of our customer journeys, where would it flag dark patterns or avoidable surprises?
  3. Are we building products that an independent agent, given the data, would actively choose for a typical user? Or are we relying on inertia and confusion?
  4. What is our posture on third-party agents? Are we designing APIs, data feeds and dispute processes that make it easy for them to work with us—or hoping they never show up?

When the bot says no

None of this will happen overnight. Bain’s data suggests that, today, agentic AI accounts for less than 1 per cent of traffic at most retailers, even if AI-powered tools already drive up to a quarter of referral traffic in some cases.

Many consumers remain wary of letting software make purchases for them, and the legal framework around agentic commerce is still in its infancy.

But it does not take mass adoption to move the market. In credit and payments, the marginal customer is often the most valuable: prime or near-prime borrowers, high-spend households, small businesses with reliable cash flows. If those segments adopt AI guardians that never pay junk fees, the economics of designing products around those fees will start to break.

In that sense, agentic AI is less a new “channel” and more a new discipline. It forces financial products to pass a test that regulators have always struggled to enforce at scale: Would a well-informed, attentive customer, advised by a tireless expert, still accept this deal?

For banks, card issuers, BNPL providers and payment platforms, the message from Levchin’s bots is simple. If your business model only works when people are confused or inattentive, the AI will eventually say no.

The question is whether you redesign the model before it does.
