A sobering report just out from the Federal Trade Commission (FTC) explores the current limits of artificial intelligence (AI, variously referred to as machine learning, automated decision systems, natural language processing, expert systems, neural networks, thinking machines, and more) for preventing online harms, including scams, fake product reviews, romance fraud, money laundering, revenge porn, hate crimes, and counterfeit product sales.

The report makes clear that the use of AI to prevent online scams is "in its relative infancy" and that AI as a standalone tool is no silver bullet for eliminating disinformation from social media platforms, identifying cloaked offers of child pornography, or stopping the sale of illegal products, among other ills.

The FTC's June 2022 report, Combatting Online Harms Through Innovation, returns to the first principles of fraud prevention, which are useful to merchants and financial institutions fighting all types of online fraud, including payments fraud. In the nothing-new-under-the-sun category, the report warns, "Greed, hate, sickness, violence, and manipulation are not technological creations, and technology will not rid society of them."

The report points out the trade-offs between increased use of AI to prevent harm and the likelihood that more surveillance could result in discrimination or censorship. With implications for the challenge merchants face in identifying potentially fraudulent actors while minimizing shopping cart abandonment, the report states, "Even with good intentions, use [of AI tools] can also lead to exacerbating harms via bias, discrimination, and censorship."

The report lists eight principles for applying AI and various automated tools. Here are three with particular importance for fighting payments fraud (a brief illustrative sketch follows the list):

  1. Human intervention is vital. When using automated tools, humans can prevent the sorts of unintended consequences that, at their most extreme, played out with the computer Hal, who very nearly murdered his human handlers in 2001: A Space Odyssey.
  2. AI tools must be transparent to the people they affect. Merchants and financial institutions must be able to explain decisions to customers and potential customers.
  3. Businesses that use AI for decision making must be accountable for errors.

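To make these three principles a bit more concrete for payments fraud screening, here is a minimal, hypothetical sketch in Python. It is not from the FTC report; the thresholds, transaction fields, and the stand-in score_transaction function are illustrative assumptions. The idea is simply that uncertain scores go to a human analyst rather than being auto-declined, and that every decision carries a plain-language reason and a timestamp so it can be explained to customers and audited later.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative thresholds: scores below LOW pass, scores above HIGH are declined,
# and anything in between goes to a human analyst (principle 1).
LOW_RISK_THRESHOLD = 0.20
HIGH_RISK_THRESHOLD = 0.85


@dataclass
class Decision:
    outcome: str    # "approve", "decline", or "manual_review"
    score: float    # model risk score in [0, 1]
    reason: str     # plain-language explanation for the customer (principle 2)
    timestamp: str  # audit-trail entry supporting accountability (principle 3)


def score_transaction(transaction: dict) -> float:
    """Stand-in for a real fraud model; returns a risk score in [0, 1]."""
    # Hypothetical heuristic for demonstration only.
    score = 0.0
    if transaction.get("amount", 0) > 1000:
        score += 0.4
    if transaction.get("shipping_country") != transaction.get("billing_country"):
        score += 0.3
    if transaction.get("new_customer", False):
        score += 0.2
    return min(score, 1.0)


def decide(transaction: dict) -> Decision:
    """Apply the model score but keep a human in the loop for uncertain cases."""
    score = score_transaction(transaction)
    now = datetime.now(timezone.utc).isoformat()

    if score < LOW_RISK_THRESHOLD:
        return Decision("approve", score, "Risk score below low-risk threshold.", now)
    if score > HIGH_RISK_THRESHOLD:
        return Decision("decline", score,
                        "Risk score above high-risk threshold; customer may request review.", now)
    # Uncertain middle band: never auto-decline; route to an analyst instead.
    return Decision("manual_review", score,
                    "Score in uncertain range; routed to analyst for review.", now)


if __name__ == "__main__":
    example = {"amount": 1250, "shipping_country": "US",
               "billing_country": "CA", "new_customer": True}
    print(decide(example))
```

The specific rules and cutoffs here are placeholders; the design point is the shape of the workflow, with a review queue for borderline cases, recorded reasons, and an audit trail, rather than any particular model.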
You can read the other five principles in the report, which is deeply skeptical of the wholesale application of this technology in its current state: "One caveat for consumer protection or competition enforcers, however, is that it makes little sense to use limited resources to obtain any AI tools without having already decided what exactly to do with them."

Another useful resource from ACAMS Today: "Your AI Cheat Sheet: Key Concepts in Common Sense Terms."

The payments industry has benefited greatly from new technology over the decades. Check imaging, contactless pay, and online payments all come to mind. As these examples show, advances in technology can provide many benefits, and, as Hal reminds us, adoption of new tools must move forward with a careful eye not only to benefits but also to risks. As always, Take On Payments will continue to report objectively on payments technology.