Machine learning (ML) has the potential to significantly change the regulatory environment—for better and worse. This session discusses how ML is helping financial firms optimize their risk management and comply with regulation more effectively and at lower cost. The session also examines ways ML helps supervisors enforce existing regulation more effectively. Looking forward, what concerns should supervisors have about potential risks ML poses to the financial system? Can the current regulatory system manage these risks?
Transcript
Stacey Schreft: We thought we would start by talking about what each person was doing in their organization, what their organization was doing in the machine learning space. I'll pass it to Scott to start.
Scott Bauguess: Thank you. I'm Scott Bauguess, I'm actually a financial economist at the SEC [Securities and Exchange Commission]. I started my career—I spent six years working as an engineer in high tech before switching to finance. I never thought that I would come full circle and come back to a lot of things that I started with when I was an electrical engineer. But that's indeed what's happened, and it's very exciting. The past few years, in particular, I've witnessed a lot of rapid changes. It's really shaken up and changed how we think about financial markets.
When I first started doing events like this or talking publicly about what the commission was doing with technology and machine learning in particular, it was about three years ago. Then, they didn't call it SupTech; now, I understand that what we're doing is called SupTech, and so I'm just going to go with that label, and supervisory technology seems to make sense. What I think I'll do now is just give you a flavor of what the SEC is doing, and then maybe go deeper when we get into the Q&A, if anybody has any particular interest in a subject.
Broadly, there are three areas where machine learning has affected the commission's work. I don't want to say “artificial intelligence” because I still struggle with understanding what exactly that means. But, certainly, machine learning is something that's rapidly evolving at the commission. One area is identifying fraud and market misconduct, and we've actually made quite a bit of progress there; the tools are extremely powerful in that vein.
Another area is reviewing and issuing disclosures. We have several hundred, maybe close to a thousand, accountants and attorneys who review registrant documents as they come in, filings that come in on almost 500 different forms that we collect periodically through the EDGAR filing system. We're now developing algorithms that help them do their review process more efficiently.
The last area is market risk assessment, which is understanding not necessarily market misconduct, but risks that exist in the market, and finding methods of using new technology to identify them more quickly so that we can address them. In particular, interconnectedness and spillover risks when there are market failures at one entity that might land with other entities. In the area of market misconduct and fraud, this is where the division I work in—the Economic and Risk Analysis Division—has put quite a bit of effort into working with our Enforcement Division and also our compliance and examinations office.
One area where machine learning has found application is identifying which registrants, which market participants, we're going to examine each year. If you think about registered investment advisers, we have over 12,000 of them. We might be able to go and look at 10 percent per year. And so, trying to identify which 10 percent to look at is where pattern recognition and different algorithms can help point an examination staff member to one entity over another.
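For readers who want a concrete picture, the prioritization described here can be read as a simple ranking problem. A minimal sketch, assuming a labeled history of past exam outcomes and a handful of hypothetical registrant features (these feature names and numbers are invented for illustration, not actual SEC data fields):

```python
# Minimal sketch of exam-target prioritization as a ranking problem.
# Feature names and data are hypothetical, not actual SEC data fields.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 12_000  # roughly the number of registered investment advisers mentioned

# Hypothetical registrant features: assets under management, prior deficiencies,
# years since last exam, and reported return volatility.
X = np.column_stack([
    rng.lognormal(3, 1, n),      # AUM ($MM)
    rng.poisson(0.5, n),         # prior deficiencies
    rng.integers(1, 15, n),      # years since last exam
    rng.normal(0.1, 0.05, n),    # return volatility
])
y = rng.binomial(1, 0.05, n)     # stand-in label: a past exam found a problem

model = GradientBoostingClassifier().fit(X, y)

# Rank all registrants by modeled risk and take the top 10 percent for exams.
risk = model.predict_proba(X)[:, 1]
top_decile = np.argsort(risk)[::-1][: n // 10]
print(f"{len(top_decile)} registrants flagged for examination")
```

This trains and scores on the same synthetic data purely to show the workflow; in practice one would train on past exam outcomes and score the current registrant population.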
Another area is working with our enforcement specialized task forces. We have something called the Complex Financial Instruments Group, a Market Abuse Unit, and an Asset Management Unit. We're developing methods to look at hedge funds that are reporting returns that are too good to be true, and looking at security-based swap transactions that appear to be anomalous, particularly around suspicious events or potential insider trading. Some of the algorithms we're using there are very simple, some are very sophisticated, and all of them have really advanced our ability to detect potential wrongdoing.
The third area, and it's an emerging area, and one that our current chairman is very passionate about in terms of retail investor protection, is our Tips, Complaints, and Referrals [TCR] database. This past year, we launched a new technology underlying how we receive TCRs from the market. A lot of our TCRs come in and they're not very well described in terms of the potential wrongdoing. That may be an elderly person in the Midwest who can't quite articulate which company it is that they might have a problem with, or the ticker symbol, or the type of allegation. Here, when they write a narrative disclosure, we can use techniques to collect information that may not be explicitly in the TCR or connect it to other TCRs. Some of that work that we're doing now is just emerging, and some of the tools we're using are quite innovative.
One area, in particular, is natural language processing; we heard a lot about this on the first panel. I'd say about 10 years ago, we started experimenting with just simple word searches, using Perl scripts and regular expressions to identify emerging trends in registrant disclosures. The first example of this is, somebody said, "Well, could we have detected the AIG issue if we had looked for the term credit default swaps in registrant disclosures?" The answer is, no, we couldn't find a leading indicator in these disclosures.
But even if we had, there was an issue of what would we have done? We would have had to know to look for credit default swaps in registrant disclosures to know whether it was a concern. This is where the next version of our natural language processing really evolved. That was about four or five years ago, when we started using a technique called latent Dirichlet allocation, or LDA. I'm sure many of you are familiar with that.
And that's the unsupervised learning that was described in the first panel, where you can pick out latent trends. You may not know what the trend is, but once you put human eyes on it, you can say, "I understand why this is being picked out." We're now using unsupervised machine learning methods and combining them with other data from our examination programs, to see whether these latent trends can help us identify what might merit a referral to enforcement staff or an investigation.
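For readers unfamiliar with LDA, a minimal sketch of this kind of unsupervised topic extraction, using scikit-learn on a few stand-in filing excerpts (the texts below are invented, not actual disclosures), might look like the following:

```python
# Minimal sketch of latent Dirichlet allocation (LDA) over filing text.
# The example "filings" are invented stand-ins, not actual disclosures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

filings = [
    "credit default swap exposure increased due to counterparty risk",
    "liquidity risk and redemption pressure in fixed income funds",
    "counterparty exposure hedged with credit default swap positions",
    "redemption requests strained fund liquidity during the quarter",
]

# Turn the narrative text into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(filings)

# Fit a two-topic LDA model; no labels are supplied (unsupervised).
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Print the top words per latent topic so a human can put eyes on them.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```

The point of the human-in-the-loop step described above is exactly the last loop: the machine surfaces latent word clusters, and a person decides whether a cluster is meaningful.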
Let me finish my opening remarks by saying that there are a number of limitations that machine learning and other advanced technologies face, at least that we face at the commission. They fall into two buckets. One is data limitations, and the other is interpretability of the algorithms. Both were also touched on in the first panel. On the first, data limitations, we focus a lot on this, and the commission is trying to figure out ways to make disclosures, financial market disclosures, machine readable.
Not just machine readable, but machine readable in a structured format where you can collect metadata, data about the data, that helps the machine understand what it is it's reading. We've made a lot of progress. This effort started all the way back in 2003, when we had our first machine-readable disclosure mandated, but there's still a long way to go. Most of the data that we get requires an exorbitant amount of resources, labor, and contracting dollars to clean up and fix to make usable. And this is a persistent, pervasive problem that I think goes beyond just human capital issues or computing-environment issues; it'll probably be more enduring.
The second is, the historical information just isn't there yet. We have a lot of cross-sectional information, which is very powerful. But the history of it is not sufficiently long to be able to make predictions with a high degree of confidence. Even in our own programs and data that we're using internally, data that we internally collect, we might be able to go back 10 years. But really, what you need is 70 years, or 80 years, or 100 years of data, particularly as you start making predictions about future market movements.
There are also issues of latency of outcome variables. If you're Amazon and you're looking at sales and what people buy: "If you buy this, you may want to buy that." You've got full discovery of the information and all actions. But if you're looking for fraud in markets, you can only train a model on fraud that you've observed; you can't train on fraud that's unobservable. This creates a significant challenge in any area in which you don't have perfect data. Even if you do have perfect data, there's still an interpretability issue. Some of the algorithms that we had at the commission, going back four or five years ago, were looking at earnings management to help identify which corporate issuers perhaps may be too aggressive in that dimension. The models were too sophisticated for the investigative staff that were using them. Those were models with causal inference attached to them. When you look at machine learning algorithms that are black boxes, it's extremely difficult to get an attorney or an accountant or investigative staff to use that output in a meaningful way.
And so, we're trying to figure out ways, if you have a model that's ingesting narrative disclosures, to build web GUIs [graphical user interfaces] that say, "Based on the model that we've applied, we'll highlight text in a narrative disclosure: 'Here are the areas that are generating the risk signals.' " Then we let a human go back through it and say, "I understand why this is here, that's not a risk." Or, "I don't understand why it is being highlighted, I should investigate it further."
Then I'll also just note—and I'll end on this point—that many of the models that we're training are backwards looking. They're being trained on data that already exist, while you're trying to find new frauds or new issues in markets. This issue of over-training a model on past behaviors risks missing something important in the future. One way we're trying to address this is by implementing a random component to our investigations or examinations so that we can test and discover new things beyond what our models are telling us. But it's extremely difficult to convince an organization with limited resources to randomly go out there and look for issues. Because you're generally not going to find something, and you have to do a lot of random searches to get something that's actually relevant. I will stop there.
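One simple way to formalize the "random component" described here is an epsilon-greedy selection rule: spend most of the exam budget on the model's top-ranked targets and a small fixed fraction on uniformly random ones. A hedged sketch (all numbers are illustrative, not actual SEC parameters):

```python
# Sketch of an epsilon-greedy exam selection rule: mostly model-driven picks,
# plus a small random slice to test the model and discover new issues.
# All numbers here are illustrative, not actual SEC parameters.
import numpy as np

rng = np.random.default_rng(1)
n_registrants = 12_000
exam_budget = 1_200          # roughly 10 percent per year
epsilon = 0.10               # fraction of exams chosen at random

risk_score = rng.random(n_registrants)   # stand-in for model output

n_random = int(epsilon * exam_budget)
n_model = exam_budget - n_random

# Model-driven picks: highest risk scores first.
model_picks = np.argsort(risk_score)[::-1][:n_model]

# Random picks: drawn uniformly from everyone not already selected.
remaining = np.setdiff1d(np.arange(n_registrants), model_picks)
random_picks = rng.choice(remaining, size=n_random, replace=False)

selected = np.concatenate([model_picks, random_picks])
print(f"{n_model} model-driven exams, {n_random} random exams")
```

The random slice is what lets you estimate how much the model is missing; the trade-off, as noted, is that most random exams will find nothing.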
Schreft: Okay, thanks. I know, John, you want to take a financial stability angle on this.
John Schindler: Okay. Good morning everybody, my name is John Schindler, I'm with the Federal Reserve Board. I'm going to talk to you just very briefly about some work that I've done, taking a financial stability perspective on fintech broadly, but in particular on machine learning, through work at the Financial Stability Board. The Financial Stability Board has an organization called the Financial Innovation Network. It's been around since 2012 and looks at all sorts of financial innovation, always from that financial stability perspective. When we did work on artificial intelligence and machine learning last year, it resulted in a report that was published. It's available online, so I won't go through all the gory details.
I did want to share with you two anecdotes that illustrate the trade-off, the positive versus the negative sides of machine learning, that anybody in a financial authority is working through right now. The first story came from a meeting that we held. We had all sorts of private sector participants, financial institutions, nonfinancial institutions. At this meeting, we had a large online market platform, like an Amazon marketplace. Not a retailer, but a platform that was created where people could post whatever products they wanted to sell. People could come and purchase those products, with all sorts of bells and whistles on this platform: the seller ratings, the recommended products, all the things that we've become accustomed to.
But this online platform was working with something new, and it was microinsurance. I go to this online platform and I want to buy something like a belt. I'm going to be at a conference, I've got to get a belt this weekend. I might be concerned that if I buy that particular belt, it won't arrive in time, and I won't have the belt and my pants are going to slide down, okay? Or I've got a black suit, and they send me the wrong color belt, or the buckle falls off the minute I get it. When you're purchasing this product, you click, "Yup, I want this belt, this size," and you click select. The next screen it takes you to has a list of options for you to add insurance to this purchase. I can insure against it not being delivered on time, I can insure against it being sent the wrong color, all sorts of different options. And the price is there, and it's pennies, seven cents, 12 cents; it's a very small microinsurance contract.
The thing that I thought was really interesting is, they have over a million transactions a day on this site. I actually believe they told us it was several multiples of that. Those contracts are not prepriced. So, when you select it and you put it in your cart and you click "purchase," they have a model behind it. A machine learning model that takes that black belt, compares it to other similar products, looks at who the seller is, what the seller's reputation is, where the seller is located, where you're located, and prices that insurance for you using that model, in milliseconds. When you get to that next page and you see that list of prices, that was all calculated in the time it took you to click from one screen to the next.
I think this illustrates a very positive side of machine learning. Here's a product that maybe we didn't even realize we needed, or maybe you don't think you need. It's a microinsurance contract, but using these large datasets, using new techniques, we're now able to price contracts like this. Now, folks like Scott and myself, we aren't actually concerned with online retailer pricing, and I'm pretty sure you're not concerned with that. But the idea, just the ability to price these things, to come up with new products that you haven't thought of before, that is something that machine learning is offering.
This could be a new derivative contract. Maybe I haven't been able to price something because the volatility was too difficult to calculate. Maybe I can use proxy data for an illiquid contract. I can use cluster analysis to look at it a different way, at a different set of products, and I can price that. I think this is the promise that machine learning offers, and I think it's a very positive thing. If we can complete markets, give you the ability to hedge or offset risk in different ways, that's very positive.
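A rough sketch of the idea behind both anecdotes (pricing something thinly traded by borrowing information from similar products), using a nearest-neighbors lookup on invented product features. This illustrates the concept only and is not the platform's actual model:

```python
# Sketch of pricing a thinly traded contract from its nearest neighbors.
# Product features and loss rates are invented for illustration only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Historical products: [price, seller rating, shipping distance (km)], plus the
# observed rate of claims (e.g., late or wrong-item deliveries).
features = np.array([
    [20.0, 4.8, 100.0],
    [18.0, 4.2, 900.0],
    [25.0, 3.9, 1500.0],
    [22.0, 4.9, 200.0],
    [19.0, 3.5, 2500.0],
])
claim_rate = np.array([0.01, 0.03, 0.05, 0.01, 0.08])

nn = NearestNeighbors(n_neighbors=3).fit(features)

# New item to insure: a $21 belt from a 4.6-rated seller 800 km away.
new_item = np.array([[21.0, 4.6, 800.0]])
_, idx = nn.kneighbors(new_item)

# Expected loss = average claim rate of the neighbors times the item price;
# quote the premium as expected loss plus a small margin.
expected_loss = claim_rate[idx[0]].mean() * new_item[0, 0]
premium = round(expected_loss * 1.2, 2)
print(f"quoted premium: ${premium}")
```

In a real system the features would be scaled and the loss model far richer, but the basic move is the same: an unpriced contract inherits information from its closest peers.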
At another meeting of this same group, we had the flip side of this. We had a fixed-income trader from a large international bank, and he told us that the dilemma he faced was that every day he received one terabyte of data. I can't even begin to comprehend what one terabyte of data looks like. He said, "There's no way traditional techniques can utilize this data, so we've been working with machine learning for three years now. We've gotten some models that we're really comfortable with, and they tell us buy, sell, how we should position, all this sort of thing." This was a meeting of the FSB [Financial Stability Board], so there were folks like Scott's colleagues, bank regulators, securities markets regulators, around the room. We go to Q&A, and the first question is, "Do you understand the model?" And people come along with further questions: "What if the model tells you to do X and you're not sure?"
When we turned back to this guy, his basic response was, "I've been using this model for about three years and I'm making a lot of money. I don't care if I can interpret the model, I don't care if I understand the model, I'm making money, okay?" That's the flip side of this. It's great that you can get a result, it's great that on average it's been making you a profit, but is that enough? From a financial stability perspective, I would argue that's not enough. Shortly after that, the session ended and there were a couple of us standing at the front of the room, and one of the artificial intelligence firm representatives came up and interrupted us. He said, "You need to hear what that guy is saying." He said, "My firm makes these programs and we offer them to the firms, and we often say, 'Why don't we come and sit down with you and show you how you can take the model and understand its results?'"
He said, "Typically, the response is, 'The model works, right? Thanks for your help, got the model, see you later.' " Just this idea that the bottom line was the ultimate driver there, and that they wanted to know not so much how the model works but just, does it work? What's the improvement in performance? I set that up to show that when a regulator or a supervisor looks at this, I do see very positive things: these new products, the possibility of offering different ways of offsetting risk that you didn't have before.
Bauguess: It's like CDO-squared.
Schindler: CDO-squared.
Bauguess: ... when they made a lot of money, didn't understand why, but ...
Schindler: Exactly. Exactly. But there's this flip side, and I worry about whether we're moving too—some people are moving too fast in this area. That's, I think, the trade-off, that's what I wanted to highlight.
Schreft: Great. Bill, you're going to bring an industry perspective, and I think—teach us how you're teaching Watson, right?
Bill Lang: Yes. My name is Bill Lang. I spent about 21 years in the regulatory space, the last 13 of those in supervision at the Federal Reserve Bank of Philadelphia, and I joined Promontory in 2015, so I've been there for almost three years. Once I went to Promontory, IBM decided it was such a big value proposition that they would come and acquire Promontory about a year later. I'm going to talk about that a bit, in terms of what was the value proposition that IBM and Watson saw in moving into financial services, and particularly into the regulatory space. I'm going to take a few minutes on just that.
Watson, as a natural-language processor, really began as a research project in 2006. Everyone knows about the Jeopardy challenge and its success in being able to take natural language and discussion and turn it not into what search engines do—a range of potential answers—but into an actual answer, a precise answer. That was a big step in what these tools could do. Since then, Watson was first used to try to integrate its work into the health care industry. The idea here is really quite straightforward. There is a tremendous amount of information, mostly in text, about diagnoses, outcomes, information collected in journals and papers and experiments, which no individual physician could possibly keep up with, to any extent.
So couldn't a machine organize and sift through that information, and present it in a way that is digestible to a physician in order to make a diagnosis? This suggests the kinds of things that natural language processing and other aspects of machine learning are extremely good at, and getting better at every day: organizing very unstructured information in a way that either presents it so that a human can see it or, sometimes even better, makes actual decisions and can act on them.
Interestingly enough, the next area that Watson moved into, in terms of industry, was financial services. Watson Financial Services started up in 2014, and then IBM acquired Promontory in 2016. Now, for those of you who don't know about Promontory, I'll talk about it a little more in a minute. But we have specialized in bringing together industry and regulatory knowledge to help companies deal with their financial regulatory challenges, strategy challenges, and risk challenges. The concept here was that Promontory was the kind of organization that could be used to train Watson on financial regulatory and other risk-management matters. I'll get into that more in a moment.
As the slide represents, increasingly, Watson is being involved across both financial and nonfinancial firms in a variety of ways. So what's the idea here? The idea, really, is that cognitive systems can unlock a partnership between human expertise and the abilities of machines to process information and language. This triangle here is an interesting one: we have the risk analytics aspect of the machine, we have human expertise, and then we have something called cognitive insight. Both the machine and humans have a role to play. That role can be different, depending on the circumstance. The standard idea is, the human is involved in training Watson, in terms of understanding whether or not it's getting the right answer. Is it answering the right question, if you will. And so that's more of an input into the machine.
Then on the output side, the machine produces an output, which goes back to humans to continue the training. Then the question is, how much does the machine then learn and take over? That could be 100 percent, that could be 80 percent, that could be 70 percent. It depends on the circumstances and the exact problem you're working on. Now, in some ways, this is not a new idea. I did a lot of work in the retail credit space. The idea of how much the machine, through credit scoring and other means, could give you answers as to whether or not you could safely give credit or should deny credit has been around for a long time. If you think of that problem, there were certain aspects at which the machine was very, very good. Actually, better than humans.
For those of us who were around long enough, we know there was tremendous resistance on the part of a lot of credit shops to cede authority to the machine. If you actually went in and looked at the exceptions that a lot of firms made to the machine, the overrides, they did a lot worse than the machines. It took a long time for people to accept the fact that for certain decisions the machine was just better. Just absolutely better than humans.
But then there was always a range of decisions, usually on the boundaries, where human knowledge was very, very important, and where the machine's ability to discern fell short. The machine was great at telling you who was a really good credit, the machine was really great at telling you who was a really bad credit. Then in the middle, it got a little hairy, and humans did play a role. You can think of these developments as expanding the space in which machines can provide information, and provide confidence, and provide decision making.
I'm not going to go through this slide very much. As a lot of the other presenters said, this is really happening. There is lots of investment going on in machine learning, in practical applications. Since we covered that in the previous panel, I won't spend time talking about it. I do want to talk about what Promontory's role is and why IBM thought it was a good match. Our expertise is bringing together senior regulators and senior business people to solve problems, whether on risk management, strategy, or regulatory compliance. The belief was that we would be the right organization to help train Watson, particularly in the regulatory compliance area.
They take us all down to Watson, on Astor Place. They hook us up to little machines and, I don't know exactly what they do, but by the end of it, all I know is that you have a preference for blue shirts, and whenever somebody mentions the city of Toronto, you get very nervous. But more seriously, we are involved very much in providing the training to Watson, both at the front end, what are the important questions, as well as at the back end, as Watson is developing answers, whether or not those are the correct answers. This is how Watson gets trained and learns.
Just to look at the kinds of things we're doing: the blue spaces are areas where we're working heavily with Watson, currently, in terms of developing solutions for customers. I'm going to emphasize some areas that I think are very, very near term. They're already happening, and I think breakthroughs are going to be very, very substantial in the near term. Some of it was already mentioned. The area of financial crime, anti–money laundering [AML], et cetera, is a perfect example where the kinds of things that machine learning can bring are very important.
I'll point to two really important things that machine learning brings to the table. One is, and a lot of discussion has already happened on this, the ability to find patterns that are not obvious in very large and, sometimes, not even such large data sets. These are methodologies that, to some extent, have been around; neural networks, for example, are not new. What's really new are advances in techniques that large scale and improved computing capability have enabled, and we are seeing that already. If you look at the way anti–money laundering or financial crime work has been done in the past, spotting very, very complex patterns in the data has been very, very difficult, and these tools turn out to provide lots of very useful results.
But the other way in which machine learning is advancing our capabilities is that it also allows you to expand what becomes useful, organized information. To some extent, in my view, that is the bigger and more revolutionary breakthrough. A lot of us in the financial literature know the discussion about using hard data and soft data. Hard data being things you can put on a spreadsheet, you can build a model on, et cetera; soft data being human experience, knowledge, information a local bank might have about its customers.
Well, essentially, the growing abilities of natural language processing, visual processing, and other things that machine learning is bringing are allowing us to greatly expand what was soft information into harder information. Now, the downside of that, as the last panel discussed, is that it's good only if the information you're using is good. And so, some of the risks here, I think, are the fact that as more and more information is drawn from nontraditional data sources, you can imagine the proliferation of…I'll call it fake data that can cause a great deal of problems. We certainly have worried about this from the point of view of fake news and things, with the prior election. But certainly, the ability of individuals to cause a certain amount of havoc in the information that exists out in the world is something to be concerned about. With that, I'll stop and we'll go to questions.
Schreft: Great. I'm going to ask some questions and mix in some of yours, and later, toward the end of the session, go more directly to yours. We heard about models that are so complicated that people can't use them, or where people don't want to know how they work in order to understand them, and about fake data. Let's start with the model risk: how do we identify and manage it as we become more dependent on these models? Should regulators be requiring that algorithms be tested to a greater ... I'm not necessarily sure I know exactly to what extent we are testing or requiring that now. So, maybe you could weigh in on that, for people like me who don't know. But should we be requiring that, to a greater extent? I don't know who would like to…
Lang: I'll be happy to start. The other panelists can talk about what regulators are requiring; I used to do that. This is, in some sense, not a new problem. There have been latent factor models around for quite a long time, where explaining what was driving the model was not necessarily easy to do. It depends somewhat on the model's purpose, and that goes to your point. Which is, if the only thing you're worried about is whether or not it is producing short-run revenue, then you probably have the...you're not looking at the right question.
I think there will need to be a good bit of transparency about code and algorithms for there to be acceptability in the regulatory space. Then when you get to the question of fairness and equity, that's going to be a very interesting one, because there are two ways you...I mean, one way you can look at it is, the way in which we do that right now isn't all that obvious either. That is, people look at decisions that are being made and don't know exactly why those decisions were made, how they were made, et cetera. Having an algorithm from which it is not easy to pick out factors, I agree with prior panelists, in some ways it's better. You can actually know exactly what the algorithm is doing. You don't necessarily know everything about what people are doing and what's driving their decisions.
Schindler: I'll throw some more anecdotes out there to get this moving. At another one of these meetings, we had a firm that uses machine learning to identify the timing of cyberattacks, and I thought it was very interesting. They didn't share their secret sauce, they don't ... what they said is, they have greater predictive power than the typical methods. Now, if somebody told me that they had hired this firm as a third-party service provider, and they were using them as an input into their cybersecurity, I think I'd be okay with that.
Even if they didn't understand the model output, so long as they said, "Okay, they told us there's an increased risk today, so we're monitoring a little bit more carefully, but we've got other things." If they told me they've got a model so good that they said, "There's a 0 percent chance of a cyberattack today, so we shut down all our cybersecurity," I'd be a little bit more concerned. I think that the need for understanding these models depends on their purpose, right? If it's going to be the only thing you're using, then, yeah, I think you've got to understand it. If it's going to be one input, then I guess I'm less concerned.
I think it also depends on—obviously, my perspective being financial stability systemic risk—if it's something where somebody is building up a position that has the potential to be a London Whale or something like that. If there's a systemic risk element to it, I think I want some narrative behind that. Why I'm building this position, as opposed to some computers telling me to "buy, buy, buy," something like that. In that case, I think you've got to be able to tell the story a little bit. But if it doesn't have that systemic risk element, maybe I'm less concerned. If it's just one input, maybe I'm less concerned.
I just had one more point, which is, when you're thinking about financial stability, you're thinking about tail risks, you're thinking about things that don't happen very often. I think you have to have a story. I can't walk into the office and say, "Well, today there's a 6 percent chance of a financial stability issue arising in the next six months," and then walk out. There has to be a narrative behind that: "The reason there's a 6 percent chance today is X, Y, and Z. And the reason that's higher than yesterday is because of A, B, and C." There's got to be a story there. If there were no story, how does the policy maker take action? How does the person who oversees the model output take action, if there is no story?
Bauguess: I'll just say, at the SEC, it's really a two-dimensional issue. One is, we have our own models, and so we have model risk within how we conduct our jobs, and then we're also supervising registrants who have their own models. To give you an example, an investment adviser may roll out a robo-advising tool. They have a fiduciary responsibility to their clients. They can't abdicate that responsibility because of a model, so there's an expectation that they understand what goes on inside that model. Same for a trading firm with a proprietary trading model. We do examinations where we will—and it gets very challenging at times—ask the registrant to turn over their code so we can look at it, because we observe behavior that we do not understand, and it's a risk. That can be an avenue that we go down.
On the flip side, at the agency, a lot of our risk assessment programs that use these techniques are four or five years old now. Each year we do an evaluation of how well they did in identifying examination targets or investigations. Each year, we have a governance program: if we're going to make a model change, why are we changing it, who's making the change, and documenting it. I would just say that, for the first couple of years, I don't know that we had a very robust internal controls program over this, and it was only after we saw that they were successful that we became mindful that any time we incorporate a change in a model, it's well understood, there's a rationale, and there was a decision-making process that went into it.
Schindler: Stacey, can I just build on something Scott just said?
Schreft: Sure.
Schindler: One of the things that we encountered, especially with ... I'll stereotype, it is the smaller fintech firms. As you said, if you look at the model, you can't just say, "Well, this is what the model said"; you have to have some responsibility there. Having responsibility, understanding the model, I think some people interpret that differently. So, we encountered this one firm that said, "Oh, our models are completely interpretable." And they talked about how they could run a report and show which variables contributed to the increase in the probability.
They said, "We can print this report out for hundreds and hundreds of variables." I feel like getting a printout that tells me, "This variable contributed 0.003 percent to this," still isn't understanding. I encountered this with a very small firm; I felt that they were pretty naïve. What it means to understand the model and the output, I think, is an issue that we're going to grapple with.
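The point can be made concrete with the simplest interpretable case, a linear score, where each variable's contribution is just coefficient times value. A sketch with hypothetical variables (not the firm's actual report) shows why a printout of hundreds of tiny contributions is "interpretable" in only a narrow sense:

```python
# Sketch of the kind of per-variable contribution report described here.
# With hundreds of variables each contributing a tiny amount, the printout is
# technically interpretable but doesn't amount to understanding.
# Variable names and numbers are hypothetical.
import numpy as np

variables = [f"x{i}" for i in range(300)]
coefficients = np.random.default_rng(2).normal(0, 0.01, 300)
values = np.random.default_rng(3).normal(0, 1.0, 300)

# For a linear score, each variable's contribution is coefficient * value.
contributions = coefficients * values
score = contributions.sum()

report = sorted(zip(variables, contributions), key=lambda t: -abs(t[1]))
for name, c in report[:5]:
    print(f"{name}: {c:+.4f}")
print(f"... plus {len(report) - 5} more variables; total score {score:+.3f}")
```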
Schreft: To get to the financial stability angle a bit, how do we deal with the fact that sometimes an individual firm's algorithm might be fine, but the interaction of algorithms could be a problem? Is that something that we'd want to test in sandboxes, or how would we get at that?
Schindler: I'll start. It's a very challenging question, and I don't have a great answer for it. I think, number one, as supervisors and regulators, we need to get much more up to speed on this issue. Unlike other areas of fintech, like distributed ledger and blockchain, which are things with potential use cases, this is technology that has already been used in the industry for years now, in some cases. I don't know if we completely understand the dynamics that it's creating.
So, getting more up to speed on what it is, being able to sit down with a firm and ask them if they understand their model: how would that model interact with another firm's model? They always say, "Oh, no, no, our model is better. Our model doesn't interact." They always ignore that potential interaction; it's a very micro view. We're trying to introduce some macro perspective: if you're using the same models, or you're using different models, how will those things interact?
I think we're at the very early stages of being able to think about this from a financial stability perspective. Fortunately, I think the use is still small enough, compared to the total set of transactions and positions, that I don't think it's a systemic risk at present. But for all the reasons discussed already today, I think it's going to be increasingly used. I think we have to be ready for what interactions might arise.
Bauguess: Maybe a good example was in 2015, when the Third Avenue Focused [Credit] Fund failed. It was an alternative investment fund and had a lot of fixed-income-type securities in it. Because of the fragility of that fund, one of the initial concerns that we had at the SEC was, well, could there be contagion? Are there other funds that, because they look like this fund, could experience a run or face a risk that otherwise shouldn't be present? We used some machine learning methods and cluster analysis to look at where the Third Avenue Focused fund sat and who its nearest peers were. It was a hard fund to really characterize, because it had some very unique investment strategies.
As it turned out, it showed up as a fund that maybe wouldn't have emerged from a narrative disclosure, given how the machine looked at it. But it also showed up as an anomaly, where there weren't any nearest neighbors that seemed to be at risk. It turned out that, fortunately, there was not a spillover effect from that fund. But this is one area where we have a lot of concern over funds that may be in technical compliance with all the rules but where, underlying that technical compliance, there is significant fragility in their investment strategy, which could induce a market risk.
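A minimal sketch of the peer-finding exercise described here (clustering funds on portfolio characteristics and asking who a troubled fund's nearest neighbors are), on invented fund data rather than actual filings:

```python
# Sketch of finding a troubled fund's nearest peers from portfolio features.
# Fund names and characteristics are invented for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

funds = ["Fund A", "Fund B", "Fund C", "Fund D", "Troubled Fund"]
# Features: share of below-investment-grade holdings, average bid-ask spread,
# and share of assets redeemable daily.
X = np.array([
    [0.10, 0.002, 0.95],
    [0.15, 0.003, 0.90],
    [0.60, 0.030, 0.40],   # Fund C is somewhat similar to the troubled fund
    [0.05, 0.001, 0.99],
    [0.85, 0.050, 0.20],   # the troubled fund: illiquid, low-grade holdings
])

Xs = StandardScaler().fit_transform(X)
nn = NearestNeighbors(n_neighbors=3).fit(Xs)
dist, idx = nn.kneighbors(Xs[[-1]])

# The first neighbor is the fund itself; large distances to the rest suggest
# it is an outlier with no close peers at similar risk of a run.
for d, i in zip(dist[0][1:], idx[0][1:]):
    print(f"nearest peer: {funds[i]} (distance {d:.2f})")
```

The same output supports both uses mentioned in the anecdote: finding funds that might face contagion, and flagging a fund as an anomaly when nothing sits near it.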
Then, on the sandbox side of this: the SEC is much different than a prudential regulator in terms of how it regulates. We're not a merit regulator, we're a transparency regulator, and so how we think about sandboxes is more along the lines of exemptive relief. If there's an activity that market participants want to engage in, for instance, 20 years ago, creating an ETF, we give exemptive relief from a rule that would prohibit it, to allow its adoption. Then we'll observe how it progresses. Then, after we observe it, maybe codify that into the rules. That's our approach to thinking about a sandbox.
Lang: I would separate out things like trading models from some other kinds of models. But I do think this issue is something that the Federal Reserve, in terms of macroprudential policy, should be thinking about: how to test and simulate some of these models and see how correlated things become. I think that will become an increasingly important thing for regulators to look at. When I think about stress testing, whether it be liquidity stress testing or capital stress testing, you can imagine being able to run simulations to see what kind of actual behavior the more automated models represent.
The second thing, which I think macroprudential regulation will need to look at, is how to use these techniques to help identify network interactions and systemic connections, in a way that presumably it could do. I mean, these techniques are well suited for looking at those kinds of things.
I'll just make one last point: when you do that—and this certainly would have been true of the last crisis—both modelers and firms often look just at the issue of prediction: how well does the model predict, et cetera. Sometimes, regulators, I think, also look too much at that. They may look at confidence intervals, and they may look at how confident you are about the model, but there's a different question, which I think is really important: what is the model telling us about what's going on that's important that we don't know about?
The example that was given before: fraud is going to evolve, financial crime is going to evolve, it's not going to be stationary. It's almost, by definition, a tautology that big risks typically are spaces in which, whatever model or whatever technique you're using, things are not showing up. That's how they become big, et cetera. Given that tautology, you should be using these models to ask what big thing is going on for which we need other tools—either more data or other information, investigative techniques, et cetera. That human interaction between different types of expertise and different types of decision making, I think, is very crucial from a financial stability point of view.
Schindler: Can I just jump in one more—
Schreft: Yeah.
Schindler: One extra point. Just thinking about the existing sandboxes, they're designed to look at new products. Machine learning itself is not a product. You could build a new product out of it, you could build a new model. But at least the existing sandboxes are not set up just to look at new algorithms or models. I mean, for one reason, there's just the human resource constraint: a lot of the supervisors just don't have the technology. We heard earlier today how expensive it is to get the top machine learning experts, that sort of thing.
I think existing sandboxes, and there are about 15 or 20 around the world right now, could deal with a new product that is based on this. But whether they would be able to get at the algorithm...if I built this new insurance contract and said, "Look at it," and they said, "Okay, I'll give you some regulatory relief to play with it," I think they could do that. But I still think they wouldn't be able to look at the underlying machine learning. At least the way they're structured now, the sandboxes are not going to be the most effective tool for looking at the underlying machine learning techniques. I think that right now, the only way would be for the supervisor to get involved. But I'm happy to hear other thoughts on that. I'd be curious.
Schreft: So, let's see. A related question—and I'm going to build on this one—is asking, what's the potential for machine learning to stabilize or destabilize markets? What steps can regulators take to minimize destabilizing effects? We've talked about the unintended consequences as these algorithms interact. You could have potential undesirable effects just from synchronized firm behavior, as people are training on the same data sets using software that's either off the shelf or really not that different, as it's built for machine learning. So with regard to this question and a related question: can we use machine learning to predict and prevent the next financial crisis? Or are we just going to predict past crises that have the patterns in the data? Who wants to start?
Schindler: I'll—
Lang: Oh, go ahead.
Bauguess: I'll just say, if you look at trading markets today, a lot of the HFTs, high-frequency traders, are using very sophisticated algorithms. I don't know the nature of those algorithms. But I know that, as you saw in the last presentation, they're executing decisions in milliseconds or microseconds. What you see in markets today, in particular, is a lot of mini flash crashes. There are, I think, examples of decisions being made very quickly by machines that take some time for the market to incorporate, and sometimes liquidity evaporates. You combine that with human decisions, particularly with humans who put in stop-loss orders to protect themselves against a loss in a security.
They're away playing golf, and the machine makes a decision, and it busts through a bunch of stop-loss orders. It creates momentum, something called momentum ignition, and the price will fall. It drains liquidity, then some other liquidity provider will say, "Hey, nice opportunity to come back in," and markets recover. But I think this is a manifestation of machines executing and humans catching up, and regulators trying to figure out what to do to prevent that sort of disruption, which could be perceived as an investor protection issue. I thought I'd highlight that as a potential issue, but not necessarily what the commission might do about it.
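The mechanism sketched here (an initial sell knocking the price into resting stop-loss orders, each of which sells and pushes the price further down) can be illustrated with a toy simulation. This is purely illustrative, not a model of any real market:

```python
# Toy illustration of a stop-loss cascade: an initial machine-driven sell
# knocks the price into resting stop-loss orders, each of which sells and
# pushes the price further down until liquidity providers step back in.
# All numbers are invented for illustration.
price = 100.0
impact_per_sale = 0.5                      # price impact of each stop-loss sale
stops = sorted([99.5, 99.0, 98.8, 98.5, 97.9], reverse=True)

price -= 1.0                               # initial algorithmic sell
print(f"initial sell pushes price to {price:.2f}")

triggered = []
for stop in stops:
    if price <= stop:                      # resting stop-loss order fires
        triggered.append(stop)
        price -= impact_per_sale           # its sale pushes the price lower
        print(f"stop at {stop:.2f} triggered, price now {price:.2f}")

# Other liquidity providers see the dislocation and come back in.
print(f"{len(triggered)} stops triggered before the market recovers")
```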
Schreft: Okay.
Lang: I think those are really good points about financial market price stability in those areas. I'll just talk a little bit more about the issue of systemic risk from a financial system point of view. There's the question of whether we're just going to learn from the past—we only have data from the past, and we won't know what the future will hold and whether it'll follow the same pattern. I think that's a really good point, and it also goes to the point some people make: "Well, if only we had 500 years of data, then we would know." I actually don't think that would solve the problem, because you'd only have 500 years of past data, and past data is revealing, but new developments occur. Depending on your outlook, you could have said CDO-squareds were new, or you could have said they were a manifestation of an old kind of behavior. That's a little bit philosophical, but more practically speaking, there was a lack of understanding of what information people needed to understand at the time, because these were new developments. One of the great things that machine learning can bring, and one of the bad sides, is that machine learning is a fabulous tool for spotting patterns that ought to be disturbing or unusual. I think, in that sense, it brings a great deal to the table in terms of helping people thwart financial systemic risks. The other side of it is that that only works if you understand that the predictive accuracy of these models by themselves is going to have problems, because they're going to be subject to the problem you just mentioned. So my personal belief is that until we learn how to have these models help us ask the right questions and bring different tools along with the models themselves, we're subject to a false sense of security from better predictions.
Bauguess: In some ways, there is a trade-off between efficiency and these destabilizing forces. Because it's very clear, for example, that HFTs are bringing price efficiency and improvement to the market 99 percent of the time. But a consequence of that is you have these periods where dislocations in prices occur, because of the different speeds of decision making that are taking place.
Schindler: Let me take the stabilizing portion of this. I'm not used to talking in a positive sense; when you talk about financial stability, you're almost always talking about the negative things. But the FSB report on artificial intelligence goes through a list of benefits, so let me just highlight one or two. One is just that, if we're using new data or bigger data sets and we can better price risk, if we can offer new products that allow you to hedge a particular risk that you weren't able to hedge before, there's a potential benefit there. It could reduce uncertainty; it could reduce potential systemic risk. So I offer that as one possibility.
The second one, I think, is maybe more controversial, and I'm not sure I agree with it myself. But if I can now price risk differently, if I can use different data sets, then maybe I can give credit to somebody who previously didn't get credit—they didn't have a credit score because they didn't have a bank account, they didn't have a credit history—but now I can use various data sources I haven't used before. I can look at their online buying history, I can look at their social media scores, stuff like this, which firms are using to evaluate creditworthiness. Maybe I can now offer credit to people who didn't have it before, so I can boost financial inclusion. There have actually been entire conferences on how broadening financial inclusion could increase financial stability. If machine learning could help boost financial inclusion, perhaps it could also boost financial stability, giving the financial markets and the financial system as a whole a wider variety of participants, assuming that risk is appropriately priced.
Bauguess: John, just a question for you. In this new data—
Schindler: I'm terrified by you asking me a question.
Schreft: We didn't set ground rules for panelists' questions of each other, but [laughter]…
Schindler: I got one ready for you, Scott.
Bauguess: On these new data sources, it speaks more to a willingness to repay versus an ability to repay. And so when you think about extending credit to new parts of the market, is it because it's deriving insights that typical credit scoring can't capture?
Schindler: Well, so, the one that I thought was very intuitive for me was new start-up firms, where you don't know if they're worthy of credit or not. But there are some firms out there that are now evaluating credit by looking at things like how many purchases or sales they have on Amazon or Amazon Marketplace. They ask for the UPS shipping history, so you can see how many inputs are coming in and how many shipments are going out. That, to me, does strike me as okay. If they've got a lot of inputs coming in, and they're shipping stuff out afterwards, it sounds like business is happening, and that speaks to creditworthiness. If you're using social media to look at how many likes you have on different things, I guess there's some information there. But what else they evaluate from that, I don't know. Some of it was very intuitive to me, how they would use it. Others, not so much.
Schreft: Let's go to a related question. How are Watson and other suppliers of artificial intelligence and machine learning grappling with the financial sector's and financial regulators' need for audit trails? Is it possible to audit the black box? Related to the financial stability angle, do we want to be requiring some preservation of algorithms so we can audit down the road?
Lang: This is a very good question, and there really are two aspects to the black box. One is the transparency of the algorithms themselves. That's really more of a...I mean, the algorithms exist, they can be made transparent. That really is a matter of trading off the issue of proprietary information versus the need for transparency. Those trade-offs often need to be made and will continue to be made. And obviously, there is a desire to keep some of the proprietary aspects of the information confidential. Regulators have their needs for transparency, and those trade-offs need to be made appropriately. But that's not a new problem. The algorithms themselves exist, and they can be provided, in that sense.
The other aspect of the black box is how interpretable the information from the algorithms is for the kinds of things that a regulator might care about. If it's in a fair lending context, what aspects or what factors are driving the model? There, you get into more difficult territory. If it's for stress testing, what are the financial and economic factors that are driving the model? That is of great importance to regulators, both because they want to have transparency about the model and because part of the use of the stress test is to understand where the vulnerabilities are of a particular institution, and of the financial system. That level of transparency does become more of a problem, just by the nature of the models themselves. They're much more latent-factor, neural-network-type approaches that don't lend themselves easily to, "Here's what's driving it. Here is the economic factor that's driving it, or the demographic factor that's driving it."
Schindler: If I could just touch on that?
Schreft: Sure.
Schindler: Is it possible to audit the black box? I think that's the really important question here. Speaking globally, most FSB members have the authority to go to a bank and say, "Show me your model," or, I'll assume, to ask a securities firm, "Show us your model." But is it possible from a human resources perspective, as you've raised—you can send me 10,000 lines of code, but do I have anybody who can interpret and follow that? The FSB, in one of its reports on fintech, highlighted the human resource constraint that a lot of supervisors and regulators face. We might know that there is a potential issue, but the ability to look at it and understand it in a way that we could use to make meaningful supervisory decisions is just not there yet.
Bauguess: I think it's probably true that regulators—I'm trying to emphasize this—are better at reconstructing events than they are at predicting them. So, when something goes wrong, looking at what went wrong and investigating it with the resources you have is more justifiable, because you need to have an answer. It's extremely hard to justify resources for something that hasn't gone wrong. And so I would say that sending three of your top experts to a firm for a week to look through a piece of code is something that may cost you three exams of a regular registrant. Are you willing to make that trade-off? I just made those numbers up, but that's the idea, the type of trade-off the regulator would need to make.
Schindler: It's like science fiction. Philip K. Dick had this story where they looked into the future to see who was going to commit crimes, and then they stopped them before they committed the crime. They made a movie out of it called Minority Report; it sounds like the same thing.
Lang: It's, well—
Bauguess: That's the SEC.
Lang: Certainly, the prudential banking regulators see it as their job to prevent the crimes, so to speak, in advance. Their job is to do both.
Schindler: To lower the probability of the crime.
Lang: Yeah, to lower the probability of the crime, that's a good way of putting it. That's where you get into what's the best use of resources and your ability to do that. You are going to run into these resource constraints, and is the best use of your resources putting all of your money into hiring people who can investigate code versus other things? That's a good question. I'll just say that there's a lot of what I'll call investigative aspects of prudential supervision, which really have to be integrated with some of these more technical modeling aspects of that process. Doing that well, I think, has a bigger payoff.
Schreft: So, let's pivot to talking about data, since we've spent a lot of time talking about the algorithms and the models. It's been said that you're better off having more and better data than a better model, other things equal. How do we ensure that we have high-quality data? The related question from the audience is, how can firms and regulators guard against fake data corrupting machine learning? I think it was you, Bill, who raised the issue of fake data.
Bauguess: I will say, just let me start, I do think that good data is better than a good model. Because you can draw incredibly valuable insights with very simple analytical methods if you have good data. If you have bad data, you can spend a lot of resources trying to make the data better, but it's very costly and oftentimes it doesn't work. From a securities market regulator's perspective, we spend a lot of time, when we write a rule and have a form with a disclosure requirement, thinking about the data we're requesting. Many times, the questions that are asked are meant to support causal inference.
If you ask a good question and it's reported accurately, that helps a model, particularly a black-box model, make decisions, because there was intent behind that question. But it's extremely frustrating, from a data analyst's perspective, to pull the information that comes in into a model, because it's often not machine readable. Or when it is, there's a confusion between electronically accessible and machine readable. You'd think that if I send you an electronic copy of data, it must be machine readable. But I can tell you, you can send us a PDF document and it can be completely impenetrable to a machine if you do a few things to it. And so, the amount of time and resources we spend cleaning data is tremendous.
I think it's also true that most market participants want good data on everybody else in the market and are less willing to provide it themselves. You face this problem of getting market participants to comply with the spirit of a rule and not just the letter of the rule. That's a challenge in giving feedback. It's a very squishy area. They're technically complying, but they're complying in a way that's not really all that useful. The advanced analytics in AI/ML are really going to drive the need for market participants to better report their information. So, fortunately, at least from my point of view, the market regulator is going to get to step out of the way a little bit, I think, going forward, as market participants report accurately, because business models are going to depend increasingly on them doing so.
Lang: I did speak to this a little bit before. I think this is going to be one of the more crucial questions we have, in terms of what the quality-of-data requirements will be, particularly from a regulatory perspective, for data used in decisions that affect the economy and affect individuals. Whether it be from a safety and soundness point of view, a systemic risk point of view, or an equity point of view. I mean, even with what we consider probably harder data than what a lot of people are talking about, the credit bureau data, all sorts of issues have arisen about fairness and equity, et cetera, in terms of problems with that data. And relative to scraping things off the web, credit bureau data is a relatively well-oiled machine, compared to some of the data sources that people are talking about using. I think this is one of the more important regulatory questions: what are the requirements going to be, from a regulatory standpoint, in terms of quality of data and uses of data? Now, personally, I think that should be slow to evolve, because we don't know where this market is going. Rushing in without good knowledge is probably a mistake, but it is going to be one of the more important areas to tackle.
Schindler: I haven't thought so much about the fake data issue, but the quality of the data has been emphasized, and I have just a couple of anecdotes. I'm sure people have read about self-driving cars and the training that they do for these cars. I remember reading one article where a self-driving car algorithm actually tended to speed much more frequently than they thought it should. Then they looked at some of their drivers, and the drivers were actually speeding as part of the cars' training set. The drivers would speed, and so would the cars.
Think about what happens if you've never trained a self-driving car on what a school bus is. I mean, there are different rules for a school bus, so you hope it has been trained to know what those rules are. A similar thing for finance might be a financial crisis. A financial crisis is a different beast, and you hope that within your training set there is data that shows how financial crises evolve. The FSB met with a firm that was doing credit risk decisions. They were using these massive data sets, but the data sets were limited; they only went back to about 2012 or 2013, and they were assessing credit risk. The question was, were you assessing credit risk over a pretty darn calm period of time? What happens when the run-up to the next crisis comes? You don't have data reflecting how things performed in that period. So is that quality data? I want good data. Fake data, that's really going to be bad. But even if you don't have fake data, just having good-quality data that spans the possible scenarios you're going to face is, I think, really important. Maybe more important.
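Schindler's point about training windows can be expressed as a simple pre-fit check. The sketch below is illustrative only: the stress-period dates, the column name, and the pandas-based workflow are assumptions, not anything the FSB or the firm in question actually uses.

```python
import pandas as pd

# Hypothetical stress windows; in practice these would come from a
# regulator- or firm-defined list of crisis and recession periods.
STRESS_PERIODS = [
    ("2007-07-01", "2009-06-30"),   # global financial crisis
    ("2020-02-01", "2020-06-30"),   # COVID-19 shock
]

def covers_a_stress_period(df: pd.DataFrame, date_col: str = "obs_date") -> bool:
    """Return True if the training sample overlaps any known stressed period.

    A credit risk model trained only on 2012-2013 onward, as in the example
    above, would fail this check: it has never "seen" a downturn.
    """
    dates = pd.to_datetime(df[date_col])
    for start, end in STRESS_PERIODS:
        start_ts, end_ts = pd.Timestamp(start), pd.Timestamp(end)
        if ((dates >= start_ts) & (dates <= end_ts)).any():
            return True
    return False

# Example usage before fitting anything:
# if not covers_a_stress_period(training_data):
#     print("Warning: training sample spans only calm markets.")
```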
Lang: I do want to make a little point on the other side of this, which is that there is this human tendency to always run the race the wrong way. Think about automated vehicles, self-driving cars. If anything goes wrong with a self-driving car and there's an accident, people get really, really upset. But then we have to compare that to 40,000 deaths on the highways in the U.S. in 2017. When you put it that way, it makes some sense; those are the comparisons we should be making. I'm not saying, by the way, that with self-driving vehicles, or here with these techniques, there aren't lots of systemic problems you have to worry about.
But there is this human trait to compare what the machine does to perfection, as if human decisions are somehow perfect in its absence, when you should be comparing whether self-driving cars can do better than 40,000 deaths a year, that sort of thing. I think that's also true here: because mistakes will be made, we're going to have to be careful not to hold things driven by new algorithms and new techniques to a standard that we don't hold other decision-making apparatuses to.
Schindler: But isn't it also human... I agree with what you're saying, there's a human nature element to it, right?
Lang: Yeah.
Schindler: I mean, when there's a flash crash, we want to be able to go back and understand exactly how it happened. It's the same with a car crash or a plane crash. In the future, if it's all run by machine algorithms that we don't understand and something bad happens, if we can't explain it, do we just say, "Oh, well, it happens less frequently than it used to"? Or does human nature say, "Boy, we've got to understand this"? That desire to understand is just part of our nature, I feel like.
Schreft: Let's talk about dependence on data. As we become more dependent on data, could we find ourselves with a particular data set or sets becoming so essential that they become single points of failure, and targets for fraud or corruption? If that is a possibility, is there a role for some regulatory involvement to address that coordination problem and prevent a single point of failure?
Bauguess: Well, I'll start by saying that at the SEC, we have data that comes in from SROs, self-regulatory organizations; we generate our own data; we have data from forms and filings. Then a large part of our data comes from commercial vendors. It's probably not a secret that some commercial vendors don't necessarily want to give a regulator their data. We've had instances in the past where we've used commercial data that was very helpful, from our perspective, in rooting out misconduct, and then had the terms of use change to the point where we could no longer use the data, or we lost the privilege. From our perspective, on the public-private partnership aspect of data, it's far more efficient for the private market to process large quantities of data and resell it than it is for us. We can buy data at a fraction of the cost it would take for us to prepare and clean it ourselves, and so we like to do that and save taxpayers money. But if it results in a situation where the market has access to information and we don't, that is... and this has happened, this is a big worry or consideration for us.
Schreft: Or the price triples and would eat up your whole budget.
Bauguess: That too.
Schreft: You’re seeing a lot of that.
Schindler: I'll mention a slightly different take on that. Not just data sources, but potentially vendors becoming important. That firm that I mentioned that did the cyberattack probability and timing work—they claimed when they met with us that they had already sold this product to 20 banks, or something like that, which I think really meant they had met with 20 banks and talked about it [laughter]. If that's really—
Bauguess: You just hope.
Schindler: I hope, yeah. If that were the case, you could imagine a systemic risk arising just because everybody's relying on the exact same model created by the exact same vendor. Whether it's the data or the models, I think this becomes more important at this point in time. And this issue of third-party dependency, the potential for new systemic players to arise, is something that the FSB wrote about in a report about a year ago.
Bauguess: What you're saying is that half the models should stop using GDP, just to make sure not everyone relies on it?
Schindler: That's one type of data dependency, yeah.
Schreft: By popular demand, we have a question for Bill. Could Bill give some actual real-world example case studies where Watson's artificial intelligence has improved bank processes?
Lang: I'll give you one, which... well, I want to be careful about proprietary stuff. In some of the AML work that we've been doing, we've been looking at new kinds of pattern recognition to look at the interrelationships among different factors. A lot of the AML approach has been to look at specific risk factors more or less one at a time. Trying to find patterns is particularly difficult in risky areas where there aren't large numbers of actual bad actors, but the risk is very, very high when those actors are very bad. We have had a lot of success using machine learning and neural network techniques to uncover some important risk clusters that were not observable with ordinary techniques.
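As an illustration of the kind of joint pattern-finding Lang describes, the sketch below runs an off-the-shelf density-based clustering method over hypothetical per-account features and flags accounts that fall outside any dense cluster. The features, the random data, and the scikit-learn tooling are assumptions for illustration, not the proprietary approach he is referring to.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Hypothetical per-account features: transaction volume, count of
# cross-border wires, share of round-dollar amounts, counterparty count.
# Real data would have structure; random noise just makes the code run.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))

# Standardize, then let a density-based method find clusters on its own.
# Points labeled -1 fall outside any dense cluster; looking at several
# features jointly surfaces combinations that a one-factor-at-a-time
# rule would never flag.
X_scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=0.8, min_samples=20).fit_predict(X_scaled)

outliers = np.where(labels == -1)[0]
print(f"{len(outliers)} accounts flagged for analyst review")
```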
The other area where we've seen a lot of positive use cases has been the ability to look at sales conduct. Basically, using voice recognition and text recognition, we can look at huge volumes of emails and phone calls: what's happening when people call into the complaint centers, what's happening when they call into debt collection, or when the debt collectors are calling the customers. And we can look for whether or not the behavior is as expected, or whether we are seeing any patterns in that data. I think those have been areas where the natural value of machine learning really comes into play, because it's able to take unstructured data, interpret it, and organize it in a meaningful way, so that the people who need to manage and oversee those functions, and regulators, can understand what's going on. I'll just stop there and not make it a sales pitch.
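A minimal sketch of organizing unstructured complaint text, again with generic scikit-learn tooling and made-up snippets rather than any vendor's actual pipeline: vectorize the text, then group similar complaints into themes for reviewers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical snippets from call-center transcripts and emails.
docs = [
    "agent pressured me to open a second account I did not ask for",
    "fee was charged twice on my statement, nobody called me back",
    "collector called after 9pm and used threatening language",
    "I was signed up for overdraft protection without my consent",
]

# Turn unstructured text into features, then group similar complaints so
# compliance staff can review themes rather than one message at a time.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
themes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, theme in zip(docs, themes):
    print(theme, "|", doc)
```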
Schreft: Okay. We also have a question that I assume is for John, about the Fed. This one asks: given the model review requirements in the application process, how can banks obtain timely approval of their models when the core functions can't be explained? I may phrase it a little differently: given, as we've talked about, that you can't really dig into these models, what steps are you taking to think about how to integrate them into the review process?
Schindler: Well, I want to call on people in the audience, because I see some folks from bank sup and reg; I'm looking at a few right now. This is not really an answer that I can give. I'm not on the bank sup side, so I don't look at the bank models. I would say that the human resource constraint I mentioned exists for everybody; it's not just the Fed. I mean, I don't know how many data scientists Scott has on call right now. I know that when we spoke with some other FSB members, they were all struggling with how, from a human resource perspective, to assess these models. And it's a challenge. I didn't hear from my colleagues abroad, whom I have spoken with about this, that it takes them any longer to look at these models. It's just that the models are more opaque, and they face a greater challenge in terms of what to do with them. But unless you let me call on people in the audience, I'm going to punt on this question.
Lang: Maybe I shouldn't, given my current role, but I did work a great deal on the model validation functions at the Federal Reserve when I was there. First, in terms of the question, I think the core functions of the models can be explained. I don't want to quibble with the question; I get its point, but just to be clear, the core functions of these models can, in fact, be explained. What's more difficult to explain, just as with pretty much any latent factor model, is what is driving the factors. In some cases you can do that, more or less, but the more complicated the models become, the harder and harder that becomes to do, in terms of tying the factors to economic or demographic variables.
Having said that, whether or not the resources are there is a different question, but you certainly can look at the predictive accuracy of the model and at the sensitivities of the model to changes and adjustments in its inputs and factors. There are things you can do to validate these models, and they should be validated in order to be used. It will take some development of techniques, and it may take some additional skill sets that are in short supply, but I think this is a quite solvable problem.
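The two checks Lang names, predictive accuracy on held-out data and sensitivity to input perturbations, can both be run against a fully opaque model. The sketch below uses synthetic data and a generic gradient-boosting learner as a stand-in; nothing here is a supervisory standard or a specific validation procedure used by any regulator.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for an opaque credit model: synthetic data, black-box learner.
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# (1) Predictive accuracy on data the model has not seen.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# (2) Sensitivity: how much performance degrades when each input is
# shuffled, i.e., which factors the model actually leans on.
sens = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

print(f"holdout AUC: {auc:.3f}")
print("most influential inputs:", np.argsort(sens.importances_mean)[::-1][:3])
```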
Schreft: Let's talk about the labor end of this, since we've shifted there. I guess there is both the RegTech end, for the industry, and the SupTech end, for the regulators. On net, do you see this move toward machine learning improving compliance and reducing its labor costs? That's one of the things that's been said of it. Do you expect the net effect to be reducing staff and actually saving costs, or just shifting staff to dealing with the machines? I guess we've already addressed how the regulators are dealing with that, but we may as well start with Bill, from the industry perspective.
Lang: I do think, net-net, it will end up being possible to do a lot of the compliance functions—not all of them, but a lot of them—much more efficiently and with less labor. I mean, the amount of labor that now goes into, let's say, a BSA/AML function, essentially looking through and gathering information and then manually processing it, is very, very large. I think the techniques we're talking about here have the prospect of improving the quality of that function, because machines have certain advantages in many of those areas, and of reducing labor costs. Having said that, you will need a strong, high-quality compliance function that accurately interprets and uses the information generated through that process. So net-net, I think it's undoubtedly going to be a cost saving in terms of labor, but you will have to put more resources into higher-skilled labor. Overall, I think it'll be a reduction in total cost.
Schindler: I think the potential change in the RegTech and SupTech area over the next five years is really incredible. I mean, the term didn't even exist a year or two ago, I'm guessing. There are so many firms; I think it's an untapped market. They never even thought of, "Well, let's go to the supervisor and see how we can organize their data better for them," that sort of thing. I think the potential improvement over the next five years is pretty striking. Just to see some examples, you can go to the Bank of England's website; they have their fintech accelerator. Every quarter or every six months, they post a list of questions or things they're looking for help with at the bank, they invite fintechs to come in and offer solutions, and they actually post a list of the solutions that have come out of it. It's really interesting stuff, a lot of it on data and model organization, that sort of thing. I think there's a lot of potential. I hope that doesn't cut us out of the labor force, you know?
Bauguess: I do feel, as on the first panel, that this looks a whole lot like the internet boom, where everybody knew that the internet was going to be a game changer, but it really took 15 years for it to seamlessly integrate into our daily lives. I feel like we're in the same place right now. We're perhaps overinvesting, not knowing what the true applications will be. It's going to take some time to adjust, and we've seen this over and over again in financial innovation, or innovation in general.
I mean, fiber optics in the '70s—how long did it take for us to actually get to benefit from fiber optics? Twenty years, 25 years. What is the latency here? I don't know. But three years ago, I could not have predicted where we are today, and I'm pretty sure I'm not going to know where we are three years from now. But I'm somewhat confident that it's going to take a long time. My one prediction is that AI will arrive about the same time the world runs out of oil. It's probably going to be a while.
Schreft: We have time for a couple more questions. Let's turn to one of the existential ones, the old man-versus-machine question. I learned last night that we probably don't need humans to identify a cat, but we will need them to identify chairs. That didn't make me feel very good. In the scope of regulation, what can machine learning do better than human regulators alone? Or what do you expect it to do better, since we're still early in this? And where will humans still add the most value? Is there a point where we've got too much automation? Actually, that's one of our questions here—whether there's a danger that we become too wedded to the machines.
Schindler: I think it's just the opposite. I think, we, as regulators, are slow to use the machines. Maybe, eventually, we will become wedded to them. I think it's going to take a long time for this transition to take place.
Bauguess: There's a convergence that's naturally taking place. We have some really talented supervisory and investigative staff at the agency, people with 20 years of experience who can read a financial document and in 20 minutes say, "I know what's going on here." Trying to complement that, to take their knowledge, put it into an ML algorithm, and let them use it in their job, takes a lot of time. I can give you some examples of where it's working.
Now, for example, a tip or a complaint comes in and it alleges a certain action, and we have a tool sitting right next to the person triaging it. The complaint says, "This firm has a liquidity risk that they're misrepresenting in their financial statements." And then we have a tool right next to it that says they're in the bottom 10 percent on liquidity, yet they're saying that they're fine. Then the staff member can say, "Here's a potential financial disclosure violation; I'm going to go investigate further." But without that tool sitting next to them, will they actually go and investigate that? Because the amount of time required to look at it is so extensive. Here, you have a convergence of technologies. I equate it to Sigourney Weaver in Alien, putting on that body armor. That's what we're trying to do with humans: give them some tools to fight off the aliens and make their job more efficient.
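A toy version of the triage tool Bauguess describes might look like the following. The firm identifiers, liquidity ratios, and the 10 percent threshold are made up for illustration; this is not the SEC's actual tool or data.

```python
import pandas as pd

# Hypothetical registrant data: one liquidity ratio per firm.
firms = pd.DataFrame({
    "firm": [f"F{i:02d}" for i in range(1, 11)],
    "liquidity_ratio": [0.02, 0.45, 0.31, 0.58, 0.12,
                        0.27, 0.39, 0.21, 0.50, 0.33],
})
# Percentile rank of each firm's liquidity among its peers.
firms["liquidity_pct"] = firms["liquidity_ratio"].rank(pct=True)

def triage(complaint_firm: str, alleges_liquidity_issue: bool) -> str:
    """Escalate when an allegation lines up with the data next to it."""
    row = firms.set_index("firm").loc[complaint_firm]
    if alleges_liquidity_issue and row["liquidity_pct"] <= 0.10:
        return "escalate: allegation matches bottom-decile liquidity"
    return "routine review"

print(triage("F01", alleges_liquidity_issue=True))
```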
Lang: I find this a very interesting question, and I'm in danger of being a two-handed economist here, because in a way I'm on both sides of it. I spent a lot of my career in the supervisory space trying to convince people that they needed to make greater and more effective use of quantitative analytics. I'm a very, very strong believer that advances in quantitative analytics, and their use, are critical from a supervisory point of view. That's my one hand, and I think that's very important.
The other hand is that models, and risk models of all sorts, are tools in a toolkit. Sometimes quantitative people fall into the trap that, to a hammer, everything is a nail; they look very much at just what the models themselves produce. It's my view, with some experience, that models should be seen as part of a decision process. Sometimes they can be the whole decision process. But more often, and particularly when you're talking about major risks to the financial system, those risks are going to arise precisely because the models didn't pick them up. If the models had picked them up, somebody probably would have done something about them.
The use of other techniques in combination with models—it could be increased data, increased information, or more of the investigative techniques that supervisors are very familiar with—is very important. In particular, machine learning, I think, is particularly adept at helping you understand where the anomalies are, where the questions are, and where you should put your investigative resources. If you do that right, it can be extremely powerful. The history of doing that right is not all that good; that, I believe, really is my other hand speaking.
Schreft: One last question, just to wrap up. Looking forward, what advances in machine learning are you most looking forward to? I'll start with you, Bill.
Lang: Well, it's exactly that one: the ability of machine learning to pick up anomalous patterns in the data, to understand patterns that are just not easily discernible, is really phenomenal. We are finding that where we bring these techniques to problems and where we have sufficient data, you find things that are just not easily intuitive. I think that has tremendous implications, from a prudential standpoint, for looking at areas of systemic risk: what is happening in the economy that is unusual, anomalous, that could represent potential risk. Tremendous ability, tremendous potential advances in understanding network behavior. I think there is also tremendous potential for the continued democratization of financial services and financial credit. Those are the big potential positives that I see.
Schreft: John, you want to add anything?
Schindler: I agree with what Bill said; a lot of that's exactly what I would have said. Making it more general, I would say that there are a lot of data sources out there that we, as supervisors, have that are so massive we've only scratched the surface of taking advantage of them. Partly that's because we've got teams of dedicated economists who each take one slice of the data and reduce its dimensionality in some way. But things like networks are inherently multidimensional—you can't reduce the dimensionality if you want to get the real, full effect there. So I'm looking forward to the ability to take advantage of these data sets, whether they be bank supervision data, payment systems data, or a variety of other things. The tools might be there in a few years for us to more comfortably assess the whole situation, as opposed to slicing the data in one way and saying, "Well, here's how it looks when I take this particular slice."
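To illustrate what keeping the data multidimensional might mean in practice, the sketch below builds a small directed graph from made-up interbank exposures and ranks banks by a network centrality score, rather than by any single slice of the table. The exposures, the bank names, and the use of networkx are assumptions for illustration only.

```python
import networkx as nx

# Hypothetical interbank exposure data: (lender, borrower, exposure in $mm).
exposures = [
    ("Bank A", "Bank B", 120), ("Bank B", "Bank C", 80),
    ("Bank C", "Bank A", 40),  ("Bank D", "Bank B", 200),
    ("Bank B", "Bank E", 60),  ("Bank E", "Bank D", 30),
]

G = nx.DiGraph()
G.add_weighted_edges_from(exposures)

# Treat the system as a network rather than a flat table: a centrality
# score summarizes each bank's position in the whole web of exposures,
# something no single one-dimensional slice of the data captures.
centrality = nx.pagerank(G, weight="weight")
for bank, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{bank}: {score:.3f}")
```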
Bauguess: I'll just add that I want to continue to see theory going into practice, and more examples of applications of ML that are tangible, that we can then use and reapply in new settings.
Schreft: It should be exciting to come back in a few years, or year after year, at this conference, and compare notes, and see how things are developing.