Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.
AI Is No Silver Bullet in Fighting Fraud
A sobering report just out from the Federal Trade Commission (FTC) explores the current limits of artificial intelligence (AI, variously referred to as machine learning, automated decision systems, natural language processing, expert systems, neural networks, thinking machines, and more) for preventing online harms, including scams, fake product reviews, romance fraud, money laundering, revenge porn, hate crimes, and counterfeit product sales.
The report makes clear that the use of AI to prevent online scams is "in its relative infancy" and that AI as a standalone tool is no silver bullet for eliminating disinformation from social media platforms, identifying cloaked offers of child pornography, or stopping the sale of illegal products, among other ills.
The FTC's June 2022 report, Combatting Online Harms Through Innovation, returns to the first principles of fraud prevention, which are useful to merchants and financial institutions fighting all types of online fraud, including payments fraud. In the nothing-new-under-the-sun category, the report warns, "Greed, hate, sickness, violence, and manipulation are not technological creations, and technology will not rid society of them."
The report points out the trade-offs between increased use of AI to prevent harm and the likelihood that more surveillance could result in discrimination or censorship. With implications for the challenge merchants face in identifying potentially fraudulent actors while minimizing shopping cart abandonment, the report states, "Even with good intentions, use [of AI tools] can also lead to exacerbating harms via bias, discrimination, and censorship."
The report lists eight principles for applying AI and various automated tools. Here are three with particular importance for fighting payments fraud:
- Human intervention is vital. When using automated tools, humans can prevent the sorts of unintended consequences that, at their most extreme, played out with the computer HAL, which very nearly murdered its human handlers in 2001: A Space Odyssey.
- AI tools must be transparent to the people they affect. Merchants and financial institutions must be able to explain decisions to customers and potential customers.
- Businesses that use AI for decision making must be accountable for errors.
You can read the other five principles in the report, which is deeply skeptical of the wholesale application of this technology in its current state: "One caveat for consumer protection or competition enforcers, however, is that it makes little sense to use limited resources to obtain any AI tools without having already decided what exactly to do with them."
Another useful resource comes from ACAMS Today: "Your AI Cheat Sheet: Key Concepts in Common Sense Terms."
The payments industry has benefited greatly from new technology over the decades; check imaging, contactless payments, and online payments all come to mind. As these examples show, advances in technology can provide many benefits. But as HAL reminds us, adoption of new tools must move forward with a careful eye to not only benefits but also risks. As always, Take On Payments will continue to report objectively on payments technology.