When the latest news in natural language processing (NLP) hits the newspaper comics on a Sunday, you know you've got a phenomenon on your hands. Perhaps you, like me, are asking yourself some questions: What the heck is ChatGPT? What does it mean for payments? How can I think about the risks? And what new ideas will the capabilities of NLP inspire?

What is ChatGPT? Like a lot of people in the past few weeks, I asked ChatGPT to tell me. The answer: "ChatGPT is a large language model that has been trained to generate human-like text. It can be used for a variety of natural language processing tasks such as language translation, question answering, and text generation."

Let's unpack this answer. "Large" means that the model is trained on vast amounts of data—that is, text created by humans. A "language model" is designed to understand written or spoken text. "Generate" means create content, which is a key capability to think about in the context of payments. Large language models like this one, using a massive amount of computing power and human training, are taught to pretend to be human in responding to written or spoken text.

How successful is this charade? A lot depends on the questions you ask and how you ask them. Your human input is still important. When you give the model a prompt, you are "programming" it to give you a list of Alfred Hitchcock's most famous movies or the ingredients for coq au vin. When you "program" a search engine by asking such a question, you see a list with links (that is, sources for the information). When you program a natural language model, you get sentences and no source for the information. The lack of sourcing is a critical distinction when it comes to assessing accuracy or bias.
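To make the "programming" idea concrete, here is a minimal sketch of prompting a large language model through an API. It assumes the openai Python package and an API key; the model name and prompt are illustrative choices, not anything from this post.

    # A minimal sketch of "programming" a language model with a prompt.
    # Assumes the openai Python package is installed and the OPENAI_API_KEY
    # environment variable is set; model and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical model choice
        messages=[
            {"role": "user",
             "content": "List Alfred Hitchcock's most famous movies."}
        ],
    )

    # Unlike a search engine, the reply is prose with no links to sources.
    print(response.choices[0].message.content)

Swap in the coq au vin question and the same structure applies: the prompt is the program, and the sentences that come back carry no citations you can check.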

Setting accuracy aside, the answers I got sounded human enough to me, if a bit stilted. Let's look at the opportunities and risks for payments.

Opportunity. Generative AI has the potential to make customers feel like they are chatting with a person when they are interacting with a bot. For customers like me, that could cut down on trudging through FAQs to get an answer—or even a hint to an answer—depending, of course, on how well trained the bot is. Chatbots could become more responsive to me personally.
Risk. Generative AI has the potential to enable fraud. New tech = new fraud, as we learned with new tech for making remote payments. The ability to create plausible content and mimic human conversation is chilling in the context of phishing, for example. ChatGPT can already pretend to be an ATM information screen.
Opportunity. Generative AI has the potential to prevent fraud. NLP tools can find patterns in data, perhaps leading them to detect fraud created with these very same tools. We've seen this pattern before in payments, with innovations in fraud followed by innovations in fraud prevention and detection, et cetera, et cetera, et cetera. As previously pointed out by the Federal Trade Commission, however, AI is no silver bullet in fighting fraud.
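As a toy illustration of that pattern-finding idea, here is a sketch of a text classifier that flags phishing-like messages. It uses scikit-learn rather than a large language model, and the tiny training set is invented for the example; a real fraud model would need far more data and careful evaluation.

    # A toy sketch of pattern-based fraud detection: classify messages as
    # phishing-like or benign. The training examples are invented for
    # illustration and far too small for real use.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "Your account is locked. Verify your password now at this link.",
        "Urgent: confirm your payment details to avoid suspension.",
        "Lunch at noon on Thursday?",
        "The Q3 budget spreadsheet is attached for review.",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing-like, 0 = benign

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(messages, labels)

    # Score a new, unseen message.
    test = ["Immediate action required: verify your account credentials."]
    print(model.predict_proba(test)[0][1])  # probability it is phishing-like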

When I asked the model, "What practices are most important to prevent payments fraud?," I got an error message. Too complicated? Too dependent on common sense? Too speculative? So, without AI assistance, here are this earthling's thoughts about ways to prevent payments fraud in the era of generative AI:

  • Keep your tech and tools up to date.
  • Share information across the payments industry.
  • Educate employees and end users.
  • Use dual controls when possible (see the sketch after this list).
  • Practice password hygiene.
  • Always keep an eye out for The Next Big Thing.
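To illustrate the dual-controls item above, here is a minimal sketch of a payment that cannot be released until two distinct employees approve it. The class and names are hypothetical, not drawn from any particular payments system.

    # A minimal sketch of dual control: a payment is released only after
    # approval by two distinct users. All names here are hypothetical.
    class Payment:
        def __init__(self, amount: float, payee: str):
            self.amount = amount
            self.payee = payee
            self.approvals: set[str] = set()

        def approve(self, user: str) -> None:
            # A set ignores duplicates, so one user cannot approve twice.
            self.approvals.add(user)

        def release(self) -> None:
            if len(self.approvals) < 2:
                raise PermissionError("Dual control: two distinct approvers required.")
            print(f"Released {self.amount:.2f} to {self.payee}")

    payment = Payment(25000.00, "Acme Supplies")
    payment.approve("alice")
    payment.approve("alice")  # duplicate approval is ignored
    payment.approve("bob")
    payment.release()  # succeeds only with two distinct approvers

The point of the design is that no single compromised account, human or bot-assisted, can move money on its own.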

To learn more, check out two podcasts I found informative: