Randal K. Quarles, a member of the Federal Reserve System's Board of Governors and the Board's vice chairman for supervision, discusses the potential impact of machine learning on the supervision and regulation of financial institutions.
Transcript
Raphael Bostic: Over the next couple of days, we're going to talk about machine learning and artificial intelligence. This is actually a nontraditional topic for a central bank or a bank regulator, but in today's world, I think it is one of the most important things we could possibly be talking about. This is our present, and it will certainly be a lot of our future. For the next two days, you're going to be involved in a lot of conversation around: What is it? What are the benefits it can give us? What are some of the risks we might be exposed to? That second part is the part I worry about the most, because if something goes wrong, we're going to get called on to testify, and I'm trying to avoid that as much as I can.
Thinking about this should really help us answer: What do we need to do, and how do we need to position ourselves, to manage the coming environment in banking and in financial services? I'm really looking forward to this. It's something I'm looking forward to learning a lot about, because I don't know as much as I probably should, and I'm really excited that we have such a great program. I'll say a little bit more about this tomorrow. But for now, I want to talk about what we're going to do right now.
We have a great opening session tonight. We are very pleased—I'm very pleased—to have Randal Quarles here with us. Randy is the vice chair in charge of supervision at the Board of Governors and one of the few people who's been in the Fed System less time than I have. He is very experienced in the private equity space: he founded a firm, The Cynosure Group, and before that he worked at the Carlyle Group. He was appointed in October, so he is getting up to speed very quickly—there's a lot to get to know. Before that, he was under secretary of the Treasury Department and has been involved in policy and finance for a long time. Please join me in welcoming Randal Quarles. [applause from audience]
I wanted to have a short conversation about machine learning and AI. Part of the reason it's so important for us is that we do our job by managing information. Machine learning—the introduction of this technology—can change the way we're able to manage and work with that information, and can allow us to be more efficient and gain some new insights. I wanted to get your thoughts on how we should be thinking about this. We'll start at a very high level: if we think about some of the benefits and some of the risks we might see from this space, what are the things that jump out to you as the interesting aspects of these areas?
Randal Quarles: I'd say at the outset, before getting into specific implications, that I've said in the context of regulation and supervision that I think there's a strong public interest not just in the safety and soundness of the financial sector, but in the efficiency of the financial sector. While I think most people view that as somehow my code for reducing regulation of the financial sector, it's actually—
Bostic: They really think that? [laughter]
Quarles: It's actually, I think, much broader than regulation and supervision. I do think there is a public interest in the efficiency of the sector, and so I began looking at questions about financial technology, artificial intelligence, and machine learning from the point of view that, to the extent these tools can improve the efficiency of the sector, we ought to be promoting that efficiency in the same way that we promote safety and soundness. Then, getting a little more granular, you say, "Okay, well, what are the implications of these tools?" I think it depends on: implications for whom?
There are implications for customers, there are implications for the firms themselves, and there are implications for the system. Both the implications and the regulatory response, I think, are a little different for each. With respect to customers, obviously, the ability of machine learning to see relationships and patterns in types of data that might not currently be taken into account in decisions around credit extension could result in the ability of financial firms to extend credit to a broader variety of customers—people who otherwise might have been unbankable.
With respect to the firms, again, the techniques of machine learning—particularly the ability to see patterns in large amounts of data that might not otherwise have been discernible using traditional principles—could allow a firm to make better, safer decisions about the provision of insurance or the extension of credit. With respect to the system—and this is a little bit beyond where we are currently—you could see the ability to look at patterns of data across the broad global financial system and spot correlations developing that might not otherwise have been visible, that might not have been possible to see without big data analysis. That, again, could help financial stability.
The flip side of each of those is that there are also risks to customers, and firms, and the system. The same techniques that could expand credit availability might contract it, because you're looking at particular factors. One of the issues about machine learning is that it's focused on prediction, and not infrequently you can't be perfectly sure why a particular machine learning tool has made the prediction that it has, even if its predictive ability is quite good. Depending on what's buried in that set of algorithms, even if it's quite a good prediction, it could be excluding categories of customers from the ability to receive financial products.
The famous example is Watson beating Ken Jennings at Jeopardy, right? Everyone knows that example—it did beat Ken Jennings at Jeopardy, but in the process, when asked to name the U.S. city that had two airports, one named for a World War II hero and one named for a World War II battle, it gave the answer "Toronto." To the extent that Toronto is buried somewhere in what is otherwise a very successful tool, there could be significant risk for a firm that might not be discernible using typical management or supervisory practices—and for the system as a whole, obviously, to the extent that you have tools making these connections, sometimes invisible connections, in broad ways. You can develop fragilities—systemic fragilities, possibly global systemic fragilities—that you aren't really aware of. That's the framework in which we begin to look at each of these issues.
Bostic: Well, I have actually follow ups in each of those areas. You started talking about Toronto.
Quarles: Yes.
Bostic: I love Toronto—and I know this is on film, so no one should take this the wrong way. But I do worry, in a machine learning context, about how you monitor what the machines are learning, to make sure we're okay with the lessons they're getting and how they incorporate them into a decision-making framework. Do you have thoughts on what we should be looking for, or what questions we should be asking of the folks who are putting together these programs?
Quarles: At one level, the regulatory framework isn't that different, right? Depending on the purpose a particular tool is being used for—whether it's a traditional tool, whether it's an AI machine learning tool, whether it's a purple panda—we have rules about that: you, the institution, need to understand how the tool works, and you need to understand what its output is going to be. We monitor that output. In some cases, in order to avoid perhaps stepping too much on innovation, there are mechanisms that we have used with existing technology—not machine learning, necessarily, though maybe you could call it early versions of machine learning. The SEC after the flash crash, for example, and after earlier crashes, established systems of circuit breakers.
Instead of saying, "We need to understand exactly how these trading tools will operate in every circumstance and why, and be able to predict how they're going to operate in every circumstance," the approach is: if they move outside of certain ranges, then we're going to stop for a moment so we can figure out what's going on. That has worked post-flash crash. With the volatility this February, for example, the system had evolved—in part as a result of the frequent circuit breaker impositions in August of 2015—in a way that was much smoother. I think there are ways for us, using traditional regulatory approaches, to deal with some of these new tools as well.
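To make the circuit-breaker idea concrete, here is a minimal sketch of what such a guardrail might look like around an opaque model. The model interface, the bounds, and the escalation path are all hypothetical illustrations, not a description of any actual SEC or Fed mechanism.

```python
# A minimal sketch of a circuit-breaker-style guardrail: the model's
# internals are never inspected; only its outputs are checked, and an
# out-of-range result halts the process for human review instead of
# flowing through automatically. The model and bounds are hypothetical.

class CircuitBreakerTripped(Exception):
    """Raised when a model output falls outside the agreed bounds."""

def guarded_predict(model, features, lower, upper):
    """Return the model's prediction, or halt if it is out of bounds."""
    prediction = model.predict(features)
    if not (lower <= prediction <= upper):
        raise CircuitBreakerTripped(
            f"prediction {prediction!r} outside [{lower}, {upper}]; "
            "pausing for review"
        )
    return prediction

class StubModel:
    """Stand-in for an opaque machine learning model."""
    def predict(self, features):
        return sum(features) / len(features)

print(guarded_predict(StubModel(), [0.2, 0.4, 0.3], lower=0.0, upper=1.0))
```

The design choice mirrors the point above: no explanation of how the prediction was produced is demanded, only a pause when it leaves a pre-agreed range.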
Bostic: That's interesting. I was going to ask about the flash crash—I'll let you off on that. I do wonder what a flash crash mechanism—a protection mechanism—might look like in a banking system where we have 6,000 different institutions that might be in very different places and very different spaces. The nice thing for the SEC is that there are a couple of large exchanges; if you can manage those, you're good to go. I think the challenge for us might be a little more significant in trying to figure out a model that can work across the various circumstances. That's interesting. Another thing I've been thinking about is the diversity in models.
Quarles: Mm-hmm.
Bostic: Do we really have any idea how diverse the models are? Because to the extent the models are all the same, or they're built from the same root, they all might be missing the same things. They all might be making the same kinds of assumptions, and so if something were to go wrong, they all would wind up in the same space. How do you think about that? Is that something that you worry about very much?
Quarles: It is. I would say that in the current situation, the early indication is that it doesn't seem to be happening. I think there are reasons to expect that it would be slow to evolve—that kind of dynamic, all correlations moving to one—in this space. There are a variety of different algorithms currently, right? Not every bank is using the same algorithms. The algorithms are being used by institutions that have different risk preferences, and they're designed to effect those risk preferences, if you will—"effect" with an "e." At least in the current environment, it would seem unlikely that you have the kind of dynamic that leads to that kind of convergence.
Now, that doesn't mean it can't happen in the future. It could perhaps happen quickly—perhaps as tools are developed that are clearly superior in how they operate, so that they're adopted by a broader variety of firms. I think that is something we need to watch carefully. Again, I think our existing supervisory methods for looking at models, and at how firms use technology, will allow us to keep a handle on whether something like that is evolving.
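One way a supervisor might watch for that kind of convergence, sketched minimally below: have different institutions' models score a shared set of test cases and flag unusually high pairwise agreement. The decision vectors and the threshold are hypothetical placeholders, not a real supervisory procedure.

```python
# A minimal sketch of convergence monitoring: compare the scores that
# different institutions' models assign to the same hypothetical cases
# and flag pairs whose outputs are highly correlated.
import numpy as np

# Rows: institutions; columns: scores each model assigns to shared cases.
decisions = np.array([
    [0.9, 0.1, 0.7, 0.3, 0.8],   # bank A's model (hypothetical)
    [0.8, 0.2, 0.6, 0.4, 0.9],   # bank B's model (hypothetical)
    [0.2, 0.9, 0.3, 0.8, 0.1],   # bank C's model (hypothetical)
])

corr = np.corrcoef(decisions)    # pairwise correlations between models
threshold = 0.95                 # illustrative supervisory threshold
banks = ["A", "B", "C"]
for i in range(len(banks)):
    for j in range(i + 1, len(banks)):
        if corr[i, j] > threshold:
            print(f"banks {banks[i]} and {banks[j]} highly correlated: "
                  f"{corr[i, j]:.2f}")
```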
Bostic: You mentioned the system and market structure. I wanted to get your thoughts on third-party vendors and the nonbank participants in the marketplace. For most bank institutions, this isn't part and parcel of what they've been doing, so they often don't have that expertise in house. Talk a little bit about the role you see third-party vendors playing in the evolution of the marketplace, and maybe a little bit about how you think the Fed—or banking regulators, more broadly—should be interacting with them and engaging.
Quarles: That's an interesting question, and I completely agree that you're right: third-party vendors will have an important role in this space. The datasets that you need for this type of analysis are frequently larger than any individual bank will have. That's one of the interesting things about this, actually—technology firms that haven't traditionally been financial services firms are able to see predictive patterns in sets of data that have not been available to financial firms. As for the computing power and the resources that are necessary, there are obvious economic reasons to have those concentrated in third-party vendors. I think we will see that be an important element in what's happening here.
Now, we have a whole regulatory regime governing the interaction of regulated financial firms with third-party vendors, depending on the type of interaction and the degree of reliance. We look at how they structure that relationship. We can, in the right case, go in and examine the third-party vendor itself for how it provides certain services to financial firms—we have limited examination authority there. Again, at one level, you just treat this as another example of a third-party vendor—a particularly important example, but one that you have a regime for handling.
I think it gets more interesting—and this may not have been where you were going with this question—when the third-party vendor decides to become a competitor, discovering, in fact, that it has a wealth of data that isn't available to the financial firm. The competitive advantage of financial firms in providing financial products has basically been the information they have about their customers. Now, there are all kinds of firms that have very broad information about those same customers of a different sort—in some cases much more granular, much more extensive—that, some believe, may be more predictive, or at least equally predictive.
It won't take long for those firms to realize, "Well, we can do something with this. The competitive advantage in providing these products that traditionally has put us in the role of vendor to the financial institution can allow us to compete with the financial institution." Then, what's the right regulatory response? Because, legally, our structure for having a regulatory hook on an entity has generally not depended on the types of products it's providing a customer, except for certain types of consumer protection. For the overall panoply of regulation, it has depended on how the entity funds itself.
Is it taking deposits from the customer in a way that the customer expects to get them back immediately? That creates the systemic issues that have been the grounds for the existing systemic regulation. You can imagine these data firms providing some of these financial services in ways that don't have that traditional regulatory hook. So then, what's our response? I think we're in the very early stages of thinking through what that ought to be.
Bostic: That wasn't where I was going to go with that question, but it was going to be my follow-up, so you hit it anyway. I actually think the things you raised were exactly on point. It raises the question: What is the basis for regulatory authority, on some level? It's funny—we're having this regulatory and reform conversation in a lot of contexts. The Community Reinvestment Act was based on the fact that most banking was done through a branch. A lot of our regulatory banking infrastructure was based on the deposit function. The world has evolved to where deposits, per se, are not as important, and branches, per se, are not as important. We've got to rethink our entire regulatory infrastructure. This is an important case, and technology is really driving it in an important way. I think this is on the frontier of what we need to think about and have ideas about. It's an opportunity for everyone to weigh in.
As we go through the next couple of days, I'll be curious to get your thoughts on it and see how they evolve, but this is something that will be very present for us moving forward. We've talked about the system, and we've talked about the customer a little bit. I do worry about customers, because a lot of what's going on is going to be behind the veil. They'll just get an answer, and it will be, I think, difficult for bank institutions to actually tell people where these decisions came from without some engagement. Given that black-box space, what do you think we should be telling banks? What is the role of the regulator in trying to promote more transparency as these kinds of things get introduced on a more consistent basis across a wide range of institutions?
Quarles: I think that's an interesting question. Go back to the beginning, particularly to the view that we want to encourage—or at least promote, and certainly not stand in the way of—technological innovation that will enhance the efficiency of the system. I think it's actually our duty to promote that, given the public interest in that efficiency. These machine learning tools can do that, but by their very nature, they do it in a way where the whole point is that it's difficult to explain. The whole point is that you are developing computational tools that come closer to the ability of humans to, as Michael Polanyi said, "know more than they can tell."
In the past, that was always the advantage of the human versus the computer, and was going to be the sort of enduring and eternal advantage—the reason computers would not replace us: there was an entire category of tasks that required knowing more than we could tell. Judgments that humans could make that were really difficult to explain according to a set of articulable rules, but that were obviously correct and proved correct over time. The whole point of machine learning is that these tools also develop the ability to know more than they can tell. They are able to identify predictive patterns in large sets of data without a clear statement as to causality.
If we were to insist that you can't use that tool unless you can explain the causal relationship, then we're essentially undermining the whole point of the development of these tools, which is that you're creating the ability for the computer to know more than it can tell, just like a human. That's why I look at regulatory responses more along the lines of the SEC circuit breaker as a potential type of response, as opposed to saying, "Show us where in your algorithm the root of this result lies"—which, as I understand it, can be very difficult for some of these algorithms and, again, undermines the point at a conceptual level. Better to say, "If this tool is leading to a result that is outside certain predetermined bounds, then let's stop for a minute and determine whether we're comfortable with that continuing to happen." So circuit-breaker types of responses, as opposed to preconceived explanatory responses, I think are appropriate.
Bostic: I hear that. I think there are two challenges we're going to face in this space. One is: What defines—or who gets to define—what "outside the bounds" is, and what those boundaries look like? The second is that we have a bunch of compliance requirements in our system, and we're going to have to be able to answer to those, and make sure that as decisions come through, they fall within the bounds of our regulatory compliance space. If it's difficult to explain how a decision came about, it's going to be difficult to explain why we should think it's compliant. We're going to have to come up with answers to both of those.
I think we're very early in the process of having these conversations, so this is exactly the venue to start to wrestle with those things. I wanted to close with two more questions—they're going to be inward-looking to the Fed—and then we'll go to questions from the audience, so get your questions ready. The first is: Do you think the Fed should be using machine learning? How should we be thinking about incorporating this into what we do, in terms of stress testing or in terms of assessing the viability or safety and soundness of institutions? Tell us a little bit about what you're thinking there.
Quarles: We do use some machine learning tools currently, with regard to back-testing and model validation in connection with stress tests. We use some natural-language processing tools as part of the supervisory relationship with large institutions—looking through emails and so forth to reveal certain relationships. More speculatively, there's a lot of discussion about the regulation of culture, something I have traditionally been a little skeptical of our regulatory ability to engage with. The culture of any institution is the most important thing for its managers, but it's difficult for regulators to specify how you engage with that.
But there are a variety of interesting—again, still nascent, but interesting—machine learning and natural-language processing tools that, through examination of patterns of communication, meetings, phone calls, emails, and so on, allow managers to get a handle on paths of influence within a very large organization that might not otherwise have been visible. That might be at least some way to get visibility into that sort of amorphous thing called culture.
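A minimal sketch of that kind of communication-pattern analysis follows, treating message metadata as a graph and using a centrality score as a crude proxy for influence. The message list, the names, and the choice of PageRank are illustrative assumptions, not a description of any tool the Fed actually uses.

```python
# Build a directed graph from (sender, recipient) message metadata and
# rank people by PageRank as a rough proxy for "paths of influence."
# The messages and names below are hypothetical.
import networkx as nx

messages = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("dave", "alice"), ("carol", "alice"),
]

g = nx.DiGraph()
for sender, recipient in messages:
    # Weight edges by how often one person contacts another.
    if g.has_edge(sender, recipient):
        g[sender][recipient]["weight"] += 1
    else:
        g.add_edge(sender, recipient, weight=1)

influence = nx.pagerank(g, weight="weight")
for person, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
```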
Where I don't think, at least in the near term, you would see a lot of use for machine learning is with respect to economic prediction—the FOMC [Federal Open Market Committee] looking at developments in the economy. There just isn't enough data. In order for at least existing machine learning tools to look over historical data and determine what patterns result in recessions—well, quarterly data over 60 years is not that much data, and you only have a handful of recessions over the course of those 60 years. It's not enough for tools like that to really tell us anything that we aren't currently able to discern. In that sense, I think we have job security.
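For a rough sense of the data-scarcity point, a back-of-the-envelope sketch follows. The figures are approximate, and the ten-events-per-predictor rule of thumb is a common modeling heuristic, not a Fed standard.

```python
# Why macroeconomic data is thin by machine learning standards: roughly
# 60 years of quarterly data, with only a handful of recessions in it.
years = 60
quarterly_obs = years * 4          # about 240 observations
recessions = 8                     # approximate count of U.S. recessions

# A common heuristic asks for ~10 positive events per candidate predictor;
# with ~8 recessions, even a single predictor is hard to justify.
events_per_predictor = 10
supportable_predictors = recessions // events_per_predictor

print(f"{quarterly_obs} observations, {recessions} recessions")
print(f"predictors defensibly supported: {supportable_predictors}")
```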
Bostic: I'm grateful for that prediction—I hope it's right. The last thing I wanted to do is ask about this space in the context of monetary policy and economic performance at the national level. The introduction of technology, machine learning, AI [artificial intelligence] allows for far more efficient processing of information and faster decision making, and on some level it reduces the demand for labor. That has implications for how the economy is going to perform and for what we should be doing as policymakers. I'm curious about your thoughts on that whole dynamic in general, and the implications you think it has for the types of things we need to be paying attention to and collecting information on, and ultimately for the direction of our policy.
Quarles: I think that's interesting. For what it's worth, I am—and I think it's reasonable to be—an optimist with respect to the likelihood that these new technological advances will prove more disruptive, will replace more human labor, than similarly dramatic technological advances have in the past. History, at least, would say that the Luddite concern—that these looms are going to replace the need for human labor—has always been wrong. Now, that doesn't mean it will be wrong again, and obviously there are reasons to think it might not be, particularly given the ability of some of these tools to involve very little human interaction. In the past, the need to monitor, develop, tend, and superintend the new technology has, for the most part, been the source of the new human labor. In many cases, these tools require less and less human interaction as they do their thing.
An example that I just find fascinating—maybe it's familiar to everybody in the room—is AlphaGo and AlphaGo Zero. As most people probably know, because it was in the news, AlphaGo is the machine learning program, trained with supervised learning, that learned how to play the game Go—which is much harder than chess, as most of you know, much more difficult for a computer to solve. Last year, for the first time, AlphaGo beat the world champion, three games out of four. That was a big psychic milestone, at least, in the development of these kinds of programs. AlphaGo learned how to play Go by looking at reams of historical data from Go games that had been played by humans. So it was essentially taught by humans the strategies to play the game, because one of the issues about Go is that it is so complex that it's hard to just brute-force your way through the way that—
Bostic: There's some abstraction in it.
Quarles: Yes. That was just last year. AlphaGo Zero is a program that was developed to replace it, and it did not use any historical data at all. Essentially, the programmers simply taught it the rules of Go and then left it alone. Recently, it played a series of 100 games against AlphaGo and beat it 100 to zero.
Bostic: That makes you optimistic? [laughter]
Quarles: So, that is a reason for caution. I mean, it's entirely possible that it's only a year from now that Skynet becomes self-aware. [laughter] Nonetheless, history would say that there is kind of an irreducible element of judgment in much human activity that can't be replaced, even by these very advanced machine learning tools. Again, take the example that all of you will be familiar with: the difficulty of teaching a machine to recognize a cat. If you do it according to rules, it's very hard to come up with the rules for what a cat is—if it's missing whiskers, the machine thinks it's a petunia. With machine learning tools, you're not trying to come up with rules for a cat, and machine learning has now allowed computers to become reasonably decent at recognizing cats—but not perfect—or at recognizing chairs. Ultimately, what makes something a chair is that it was created by a human for the purpose of being sat on. That's something the machine is not going to be able to judge simply from comparing many, many different examples of chairs, and that sort of role for humans—determining the purposes and direction of human activity—is not going to be replaceable, I believe.
Bostic: You just got super-philosophical there. When you take these topics and extend them, you go exactly to this space. There are some fundamental questions here: What is it that humans do? Why are we here? What should our relationship be with these tools that are now becoming extremely powerful and are evolving in our midst as we live? I think these are deep questions that we will continue to wrestle with, certainly over the next two days, but they will be with us for many, many years. Let's go to questions. Are there questions in the audience? Please wait for a mic to come to you, then please introduce yourself, and please make it a question.
Doug Elliott: Don't worry, this one actually is a question. It's Doug Elliott from Oliver Wyman. Hello, Randy.
Quarles: Hi.
Elliott: This may be too early to ask you this, but given that you've described how important these changes will be, what are your plans at this point for changing how the Fed approaches these issues? Making sure you have the right people, the right expertise, the right data, the right organization?
Quarles: It is early in our process of deciding how we're going to approach them. Right now, we are paying a lot of attention to them within our existing framework, as I indicated. To the extent that you have a machine learning tool that is interacting with customers, we want to ensure that the traditional customer protections we would expect if it were a human are being complied with by the institution. If it's evaluating risk, if it's trading, we apply our existing rules as we would if this were something being accomplished by more traditional mechanisms. I do think that, because of some of the issues we've talked about here and that you'll be talking about over the next couple of days, we will need more specialized responses over time—probably over a relatively short period of time—but we don't have a framework for developing them yet.
Bostic: Other questions? I see a hand up here.
Constance Hunter: This was very interesting. Constance Hunter from KPMG. As I've listened to you talk and think about this issue, it makes me think about the difference between critical state theory and chaos theory—chaos theory having boundaries, critical state theory having no boundaries. It seems like the real world is much more like critical state theory. How do you think about regulation, insofar as the real world seems more like critical state theory than chaos theory?
Quarles: Well, that's a large question. I would go back to the principle I articulated at the outset. Precisely because the systems we're dealing with in the real world are so complex, and these new tools for analyzing, seeing patterns, and making predictions are dealing with such complexity, I am reluctant for the regulatory response to be that we need to put bounds around that—bounds in the sense of limits on the content of what can be done—as opposed to milestones or checkpoints, if you will, in the application of some of these tools. So that, again, if they are reaching results that cause us to stop and pause, we stop and pause and consider, as opposed to knowing exactly what's going to happen before anything begins. Again, I think that would put too much limitation on innovation.
Bostic: Other questions?
Julia Coronado: There was a second question here.
Bostic: Oh, second question here? The table is monopolizing this space, go ahead.
Coronado: Julia Coronado, MacroPolicy Perspectives. We've seen a lot of the benefits of technology and big data and machine learning, and what they can bring in terms of efficiencies across a lot of industries. But what's come to the forefront lately is privacy, because sometimes that exploration and those efficiencies come with unknown intrusions into people's privacy. Do you see it as part of the supervisory responsibility of the Fed to ensure the privacy of customers?
Quarles: There are decisions about privacy: who owns the data, what can be done with the data once it's determined who owns it, who owns the use of the data—and those are slightly different questions legally, regulatorily, and philosophically. I don't think it's our role as regulators or supervisors, absent some instruction, which we haven't received, to make those decisions. Those are decisions that ought to be made in the political realm, in a democratically accountable space. Once they have been, it will be up to us to implement whatever rules are decided on, just as we enforce other laws. When there are clear decisions about the privacy implications of some of these tools, then obviously it will be our role to enforce those decisions, but not to make them independently of that democratic process.
Bostic: One here in the front.
Rick Lundwire: Hi. Rick Lundwire from BNP Paribas. I want to go back to your point about macroeconomics and the idea that our jobs aren't at stake here. When we think about what happened in markets at the beginning of this year, the Fed is looking at macro indicators, which have a lagged component—whether we're looking at employment, GDP, inflation, whatever. Then we think about what the financial markets are telling us about how the dynamic is changing, and the signal they're giving to traders and other folks in markets—which the Fed, of course, doesn't really have a grasp of. If we start to think about some of these indicators—even just prices, and there are a ton of prices out there—there are certainly large datasets we can look at, and machine learning that could be done to get a better sense of the turning of the cycle. Whether or not it accurately predicts recessions is a bigger question, but can it give us a bit more insight? Can it change decisions in monetary policy?
Quarles: I think there are uses for that kind of analysis, and it can help inform our decisions. There is, obviously, a lot of data about prices, and a lot of macroeconomic data. But the volume of data relative to the types of decisions that the Fed or other macroeconomic forecasters have to make is relatively small compared to the vast amounts of data used for some of these other purposes of machine learning. I think it can be useful, but there are two things you won't get out of the macroeconomic data that's available that you do get in some of these other areas: (a) extremely robust predictions, and (b) unexpected predictions.
Bostic: I think we have time for one more and we're going to go way in the back, to the right here.
Carlos Arella: Thank you. Good evening. This is Carlos Arella. You talked about risk and the evolution of the algorithms for machine learning. Let's assume for a second that indeed we are able to develop all of these algorithms and that the ultimate determination is that the risk is us, that the risk is the human.
Quarles: Skynet is becoming self-aware.
Arella: Exactly.
Bostic: Or the Matrix, or the Terminator. We have many examples that we are aware of.
Arella: Could it get away from us in this way—the machine learns to play Go, learns to play chess, learns to work the financial markets, and then the determination of the risk is that humans shouldn't be allowed to play Go, shouldn't be allowed to play chess, shouldn't be allowed to run the economy?
Bostic: Do you want to take that? [laughter] As the moderator, it's my job to jump in front of those sorts of questions. Look, I think that is a question for a philosophy department; if we were on a college campus, there would probably be months and several courses on this question. But I think, for me, one of the biggest challenges—and I'm curious as to your thoughts on this—is that whether we want this to happen or not, whether we think it's a good idea or not, it's being deployed, right? There are companies finding that these are ways to reduce costs, that their decisions can be made faster with more consistency and more precision, and the data they have is being mined.
While I'd love to have this conversation—and folks at the Bank who have heard me talk about this know this stuff terrifies me, exactly because we don't know where it's going to go—there's a great unknown, but that train has left the station, so we've got to figure out how we're going to manage it. I think that's one of the reasons we're having this conference: to spark a conversation that we want to be a part of about how we manage this moving forward and make sure we reduce the likelihood that it concludes the problem is us.
Quarles: I think that's a good conclusion.
Bostic: Thank you. Please join me in thanking Randy Quarles. Randy, it was great.