Policy Session #3: Cyber Risk in the Financial Sector
Given their place in modern economies and the potential profitability of stealing their assets, financial institutions are especially high-value targets for cyberattacks. Panelists Stacey Schreft and Jason Healey provide a high-level discussion of the evolving threat environment with moderator Trish Mosser. They consider measures financial firms can take to protect themselves from cyber risks inside and outside their perimeters, as well as measures financial regulators and central banks are taking to strengthen the financial system's resilience.
Transcript
Trish Mosser: As you can see, unfortunately, we're short a panelist this afternoon. Greg Rattray, the founder of Next Peak risk management company and the former CISO [chief information security officer] of JPMorgan Chase, is apparently in the middle of an MRI before going into surgery this afternoon, so he has a really good excuse for not being here. Anyway, our best wishes go out to him, obviously. Jay and Stacey and I will soldier on. I think it's quite possible that we will end up a little early, but everybody can just smile at that and say, "okay, we get an extra 10 minutes or so out in the sun" because it's really gorgeous today. Sadly, very sadly, the motivation for this session on cyber risk is greater today than it was last fall, when Larry Wall first reached out to me to see if a panel discussion on this topic would make some sense for the Financial Markets Conference. The world is an increasingly scary place, it seems, and that will certainly be a bit of what we'll talk about today.
For the last four-and-a-half years, I have been part of an interdisciplinary project on this topic, examining cyber risk in the financial industry with a particular focus on systemic risks. I think we're going to talk quite a bit about systemic risks as we go along. In that group, one of the very first things that we discovered was how tough it was for cybersecurity experts and financial market experts to actually communicate with each other. Both of those worlds are very, very technical. They're filled with jargon, which is sort of difficult to understand, and frankly, those folks speak different languages. Even worse, in some cases when they do use common languages, common words, a decent amount of the time they mean different things by them. Conversations can be a little bit confusing. Frankly, I think over the years, with as much expertise as has grown up in financial risk management and in cyber risk management, there remains even to this day a gap in the understanding between these two worlds, and a gap that, frankly, is increasingly dangerous.
Let me give you an example. Cyber experts' approach to risk management is very much about stopping the shock. Financial risk management would never focus on trying to stop shocks. It's impossible to do, right? Instead, it tends to focus on resiliency. "Can we withstand a shock this big? If we can't, how do we need to change our positions or change the way we're running the firm?" Moreover, the entire nature of financial activity is about the interconnections, interconnections with customers and clients, investors, borrowers, lenders, whatever. No shock or attack is really just about an individual company. It's about the fallout and the spillovers, and the contagion to everybody else. There are a few things that the cyber world could learn, or should at least understand, about financial risk management, but on the other side of the ledger, one thing financial experts need to consider is that the nature of cyber-attacks is not like other shocks that the financial system gets hit with.
First, cyber-attacks are not random. They are fully intentional, and the threat actors who carry them out go into them with incredibly specific goals. The most common of those goals are the intent to steal, or the intent to destroy, particularly data, or the intent to disrupt, and that includes disruptions that perhaps have national security implications. The second big difference is that cyber shocks are very carefully timed. The timing is not random, either. Cyber-attackers will sit in your system for weeks, months, and pick the optimal time when you're most vulnerable and when they can have the biggest impact. That's not true of most other types of financial shocks, or the way that we think about them.
There are things on both sides to be learned, and a goal of this session today is to try to bring both sets of expertise to bear on the problem at the same time, and, we hope, maybe close the gap just a little bit. We're going to cover three areas, and we're going to split the middle one in two. We're going to start with the cyber threat landscape. We will talk about developments and considerations in cyber risk management. Finally, we want to talk about what financial regulators and central banks are doing to strengthen the resiliency of the entire financial system. We have two great experts who are with us, fortunately. First, and speaking first, is Jason Healey. Right now, Jay spends a decent amount of his time as an adviser to the Cybersecurity and Infrastructure Security Agency [CISA]. For those of you who are not up on national security acronyms, those are the folks, the agency, responsible for Shields Up, which has been in the news quite a lot in recent months. In another life, meaning his full-time job, he's a senior research scholar at Columbia, and he's going to start, at least, by talking about threat actors and the landscape. The second panelist is Stacey Schreft. Stacey is the deputy director of the US Treasury Office of Financial Research, where she is responsible for the research and analysis center. Currently she's on leave, visiting the financial stability division at the Federal Reserve Board, where she's working, fortunately, on cybersecurity. She's going to talk about official sector actions, regulatory and otherwise, to reduce cyber risks. I'll be quiet now as a moderator should and turn it over to Jay.
Jason Healey: Thank you. I've got a slide...if we can pull up the slide, no rush. Trish asked me to look at the cyber risk landscape and talk about how we got here, to a substantial degree. Much of the story of how we got here goes back to the early days of computing, because in the early days of computers, you didn't have to worry too much about computer security. Your computer security was that you locked the door and made sure that the only people touching the computer were people who could really be trusted and had your best interests at heart. There was no systemic risk. You didn't particularly have to worry about intentional attacks, which were a far smaller concern than unintentional mishaps, something simply going wrong. That all changed with the internet. Actually, it started to change when you had the mainframe and then, around, say, the mid-'60s, you started to add remote terminals and remote printers.
Now, locking the door wasn't enough. My field, computer security, really started in that 1969, 1970 timeframe. There was a report out by the Defense Science Board that said, "Locking the door is not going to be enough. What do we do about this?" They said, early on, this was 1970, that our security...they didn't quite say it is never going to be enough, but that the attacker is always going to have the advantage. If it's only people with clearances in a locked room, we can secure it. Once you take away those constraints, we're not going to be able to secure it. It's going to be out of our hands, almost literally. We took that, and then we added the internet to it. You already had these individual clusters of computers that we knew we couldn't secure, and then we added them to a network, so they're connected to every other computer, almost literally, in the world. Once we did that, we started to have the systemic effects, where the attacker no longer has to be in the locked room; they just have to get into one of the remote terminals. They can be anywhere in the world.
I heard someone formerly of the FBI, Shawn Henry, say that it used to be, if you were worried about bank robbers, you knew geographically that the bank was robbed half an hour ago. Cars can only go so fast, so we know they're in this radius. Of course, with the internet, you don't have that, because computers can be affected everywhere. More importantly, we know that fact about the bank robbers; it probably gets said a lot. We think a lot less, and we research a lot less, about what that then means for systemic shocks. Not just systemic shocks within the internet, but how that's going to affect all of the key infrastructures and other key systems that are reliant on the internet and the related computing and networking. That's what my work with Trish has been on.
You asked about threat actors and the history of threat actors and how we got here, and it's almost stunning how little has actually changed since, I'll call it, the mid-'90s. I did a history book, kind of a military history of cyberspace and how states have been competing and contesting with each other, in part because I'd helped set up the very first Joint Cyber Command. Now Cyber Command has thousands and thousands of people, and it started out with just 25 of us. I go back to the materials from those days, the unclassified materials, Trish, and it's astounding how little we were talking about cyber criminals. We were talking about the same states, North Korea, Iran, Russia, China, even the same kinds of incidents that we went through. To me, the very first cyber conflict was one called, don't worry about what it was called, Cuckoo's Egg, in 1986. You could take someone who was in that incident, and you could bring them forward into something that Russia is doing, or Ukraine, or that China is doing, or the US is doing today, and even though the technology has changed so much, they're going to understand what's going on pretty quickly, because those dynamics of what is happening in the networks have been pretty stable. The actors have been stable, and the kinds of things they're doing are stable. Even look at the most substantial national security incident that we've had, the 2016 election interference. It would have been completely uninteresting from a 1998 standpoint, right? Someone broke in, they stole email, and they released it. It's pretty basic.
What has changed is our capacity to respond. When Vladimir Levin hit Citibank for $10 million in 1995, Citi said, "We should have one person in charge." That led to the modern chief information security officer. In 1998, President Clinton signed a directive to say, "Let's have these things called ISACs, Information Sharing and Analysis Centers, and they might be like a CDC for the internet." That was actually in the decision directive, and it has gone on to work very well within the finance sector, so that we can share amongst ourselves, with cooperation with Treasury and the Treasury agencies through the FBIIC [Financial and Banking Information Infrastructure Committee] and the FSIC [Financial Services Innovation Coalition]. Don't worry about the acronyms; that hasn't gone well. The other big change has been, around 2003, what we called the rise of the professionals. When I was doing more cyber security and cyber conflict as a day job, you didn't have to worry so much. States were involved, but only at a low level. Cyber criminals were involved, but they weren't all that serious about it. Really, for the past 20 years, states started to say, "Wait a minute. This isn't just a little thing on the side. This is the main thing for us to get involved in."
I look at North Korea. I used to think it was great that North Korea was trying to get into hacking, because hacking didn't do that much. I thought, there's only so much roots and grass to go around to feed them. The more they're feeding the hackers, the better. Now you see they're routinely pulling off $100 million heists, routinely, commonly anyhow. Likewise for the criminal groups. It had been small criminals acting on a one-off, like Vladimir Levin hitting Citi for $10 million. Now you're seeing these criminal groups, organized crime, that are capable of so much more.
This slide is the work that Trish and I, along with a colleague, Katheryn Rosen, who is now at JPMorgan, and Alex Workman, a former student, said, "We've got cyber risks." That's on the left side of the chart there. Most of those cyber risks, the way that we talk about it, are going to be pretty familiar to anyone in the field. It won't be quite the way they think about it, but it's going to be close enough, especially in the actual report. On the right side was Trish coming in and saying, "The central bankers, here's the way that they're thinking about financial stability." We said, "How can we think about cyber risks affecting financial stability?" Fortunately, there was some great work on that transformation from cyber to financial issue, including from Stacey, to say, "How can one happen to the other?"
Why we liked this framework was that it works both ways. You can say, "If something bad happens on the cyber side, how can that transform into a financial stability problem?" You can also go from right to left and say, "For any given financial stability issue that we're worried about, how can we go from right to left and figure out what cyber issues might then trigger that?" It also helps from the bottom up, with the amplifiers and dampeners.
Fintech. Is fintech going to amplify the cyber risk to financial stability because of the regulatory arbitrage, and getting out from under the regulation? Or, which aspects of fintech, to ask the better question, are going to dampen cyber risks because you're decentralized, you no longer have single points of failure to worry about, because you've moved to things like distributed ledgers? We're still working through that. The last I'll say is, so much of what we think about as cyber risk is inside the enterprise only, right? When Citi stood up, their chief information security officer, they were only worried about the security of Citi. We haven't done well expanding that risk horizon, of saying you have to start thinking about...we've done better on supply chain and outsourced...but we need to continue all the way out to external shocks, like the exogenous things like Ukraine, which I anticipate we'll talk about.
When I started to do this work on systemic risk, it was 2012 or so, and some wag said, "Ha, ha; it's cyber subprime." If you work in cybersecurity, every chucklehead is trying to take whatever they think is a bad issue and add cyber in front of it. The more I thought about it, the more I thought, "Well, it's actually a fabulous term," because prior to 2008 we tended to look internal to the enterprise for risk. We were chopping up the risk, selling it off, and we didn't have to worry about where it was going because the risks were disaggregated. It turns out that we were wrong. There was still this cascading risk that could happen. The risks were correlated. That thought really helped us to think about cyber risk. We tend to be thinking only about what happens in the enterprise, but in fact these risks are highly correlated and sit in these pools of risk, which is basically what that bit on the left is. Those are different pools of systemic cyber risk. Most of us just don't really understand the systemic risks very well. Thank you.
Mosser: Thank you. Stacey?
Stacey Schreft: Great. I will start with a disclaimer while my slides are coming up. The views I'm going to express are my own, and not necessarily those of the Federal Reserve Board of Governors, the Federal Reserve System, the US Treasury, or any other organization you want to throw in there. When my slide's up, you're going to see that I'm starting with a graph of the transmission channels. Jay showed you his, and if Greg were here, he was going to show you just one slide, and it was a graph of a transmission channel, too, showing how you can start with a cyber event and end up with some impairment of financial stability.
I thought it was fascinating that, totally independently, we all chose to show you a chart of basically the same thing, and the fact that we were each doing that did not lead any of us to take our slide out and let the others discuss it. I've been thinking about that through this conference. Why is this? I think some of what Trish said alludes to that, which is that cyber and financial stability as an intellectual area is really a pretty new area. I date it back to about 2015, in terms of economists thinking about it, and the official sector and policymakers really thinking about it.
It's not that cybersecurity didn't matter, but cyber and financial stability, that it really could affect the system and not just be a firm problem, is a relatively new idea. Jay was talking about how we're still not quite there, where we want to be, with understanding the systemic implications. I would also say that I think what I call modern financial stability analysis is a relatively new area, too. I would date that back to about 2008 because when I was working on it about 10 years earlier, people said, "We'll never have another financial crisis." [laughter]
Anyway, this is my transmission channel, and all of our transmission channels are really saying the same thing. We're just slicing and dicing them a little differently, given our different perspectives. Maybe someday before I retire, we'll be able to come in and say something about MV=PQ [M=money supply, V=velocity, P=price, Q=quantity], and everyone will know what we're talking about. I think we're not using exactly the same picture, and we're choosing to go through it to get everybody into the same framework.
I want to focus on shocks versus vulnerabilities. Trish was talking about shocks, and so shocks in financial stability assessments are these big, bad events, usually surprises, that if they're big enough and they're bad enough, and they hit a sufficiently vulnerable part of the financial system, they can impair financial stability. That's true with cyber events, too. The shocks in cyber are cyber events, and a cyber event I'm going to define as "an occurrence within a computer system or network that's not supposed to be occurring." It could be malicious or not; things go wrong all the time, unintentionally and intentionally. It could originate from within or outside the organization. We've heard a lot of talk about cyber-attacks. If it's malicious, we would call it a cyber-attack. I'm using a broad definition here because the mistakes, the configuration errors, the unintended consequences, are all serious problems, but cyber events are occurring all the time. The financial system is bombarded with them, and the good news is that they're almost all defended against. They're non-issues.
Sometimes a cyber event becomes a cyber incident, a cyber event that ends up impairing a firm, one or more firms, in some way. I'm going to use the term "firm" for simplicity, but I mean anything other than an individual person, some business organization, a nonprofit, whatever. The impairment to the entity is something like a disruption of operations, could be a cost, a monetary cost or non-monetary, could be a loss of reputation, it could be a loss of confidence that results in what I'll call more localized runs on that entity, or asset fire sales relating to that entity. For that to happen, that big, bad shock has to be big enough and bad enough and hit a sufficiently vulnerable spot in the organization. Those vulnerabilities are weaknesses in cybersecurity, the weaknesses in defenses against cyber events, or an inability, given that you didn't defend too well, to respond and recover effectively and fast enough.
When that happens, when you have that big, bad shock hitting that vulnerability, your cyber event can become a cyber incident, and a cyber incident by itself does not impair financial stability. That cyber incident has to hit, involve, some vulnerability in the financial system at the system level so it becomes systemic. It has some bigger-scale effect that can impair financial stability. What are those vulnerabilities? Those come from digital dependencies, or single points of failure, single or near-single entities that provide a critical service for which there is a lack of substitutes, or substitutes that you can't switch to quickly enough. It could be that the loss of confidence isn't just limited to the directly affected firms, that it actually expands more widely to a broader set of firms or markets, so your runs and fire sales are occurring on a broader scale. Those are just some of the vulnerabilities at the system level that can allow a cyber event to have systemic impact. Now we've gone from your shocks, to your vulnerabilities, to financial stability, and it's usually at this point that, if I'm talking with people, they ask me if I mean fintech, or digital assets, or blockchain, all the buzzwords of the day. My answer is usually, "Yes, but you don't need any of that. You have plenty of cyber risk without that."
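[Editor's illustration] Schreft's two-stage transmission channel, in which a cyber event only becomes a cyber incident if it hits a firm-level vulnerability, and an incident only threatens financial stability if it also hits a system-level vulnerability, can be summarized in a toy sketch. The classes, thresholds, and example values below are hypothetical illustrations of her verbal description, not anything from the panel's slides or an actual supervisory model.

```python
# Minimal, purely illustrative sketch of the two-stage transmission channel
# described in the talk. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class CyberEvent:
    malicious: bool          # attack vs. accident/misconfiguration
    severity: float          # how "big and bad" the shock is, 0-1

@dataclass
class Firm:
    defense_strength: float  # quality of defenses, 0-1
    recovery_speed: float    # ability to respond and recover, 0-1

@dataclass
class FinancialSystem:
    single_point_of_failure: bool  # critical service with no quick substitute
    confidence_contagion: float    # how far a loss of confidence spreads, 0-1

def becomes_incident(event: CyberEvent, firm: Firm) -> bool:
    """Stage 1: the shock must exceed the firm's ability to defend or recover."""
    return event.severity > min(firm.defense_strength, firm.recovery_speed)

def threatens_stability(incident: bool, system: FinancialSystem) -> bool:
    """Stage 2: an incident matters systemically only via system-level vulnerabilities."""
    return incident and (system.single_point_of_failure or system.confidence_contagion > 0.5)

# Example: a severe event at a weakly defended firm that is a near-single provider
event = CyberEvent(malicious=True, severity=0.8)
firm = Firm(defense_strength=0.4, recovery_speed=0.6)
system = FinancialSystem(single_point_of_failure=True, confidence_contagion=0.3)

incident = becomes_incident(event, firm)      # True: firm-level vulnerability hit
print(threatens_stability(incident, system))  # True: system-level vulnerability hit
```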
I want to take a moment to talk about cyber risk as a risk in the financial system, among all kinds of other risks. Usually, conferences like this, and financial stability reports, and just economists for decades, have been talking about financial risks and operational risk, and cyber risks are relatively new on the agenda. Operational risk is a really big risk in the financial system, and cyber risk is a big component of that. As our financial system digitizes, cyber risk grows, and it becomes a bigger share of operational risk. Banks and non-banks, with or without digital assets or blockchain and the like, are subject to a lot of cyber risk. When I say "non-banks," I'm using that generally to refer to anything in the financial sector that is not a bank, so I'm including financial market infrastructures in there, too. Fintechs, digital assets and blockchain, I think of those as all "digital first," or natively digital entities or technologies. They only exist in a digital world. They only came about then, even though there are still some humans involved somewhere.
As entities or technologies, they are certainly subject to cyber risk. To the extent that they are used by, or built into the service lines of, banks and non-banks, they amplify cyber risk there. I've got vendors in my picture, too. You'll notice I don't have technology vendors, just vendors, because all vendors experience cyber risk and amplify risk and pose risk to banks and non-banks. When I talked about firm-level risk in my transmission channel diagram, that cybersecurity that's needed, I didn't say it at the time, but I meant that it includes the security of your third parties. Financial institutions are all responsible for ensuring the security of the third parties that they use, which is quite a task.
To think about vendors, I would just encourage you to think back to the Target store data breach, which I think was 2013. That was a fascinating one, because it came about partly because of how Target configured its systems, but the compromise that allowed it initially, that shock, was an event that occurred at a heating and air conditioning vendor. I don't know about yours, but when I think about heating and air conditioning vendors, I don't really think of them as being highly digital, or an obvious source of a compromise. I think that's just a sign of how all kinds of vendors can expose you to cyber risk.
I want to add in these circles, ovals, that have the traditional financial risks that we normally talk about at conferences like this. In the profession, we've talked about them for decades. Almost all of our data on the financial system is about financial risk: credit risk, counterparty risk, interconnectedness, asset valuations, funding, leverage. I've got this set up with the idea that it is overlapping all of the entities and organizations that I've got here, so banks, non-banks, fintechs, blockchain, whatever, they're all subject, even the HVAC company, to financial risk as well as cyber and operational risk.
I want to also make the point that the realization of financial risk amplifies cyber risk to an entity, and the realization of cyber risk amplifies financial risk. Take an organization that is experiencing some financial distress because of the realization of some financial risk. That firm, in my opinion, and you're the expert on this, Jay, is a better target. Its management is going to be distracted. They might be putting their effort into trying to grow out of their financial stress, and not putting money into security of any kind, postponing initiatives to improve their cybersecurity. They might be thinking about cutting costs, or actually cutting costs, and layoffs might be on the table, or at least, anybody at a company like that is going to be thinking about what the future holds. With turnover in management and staff, including IT staff, and with staff just distracted, the company becomes a better target and more subject to cyber risk, or just accidents.
A firm that is experiencing a cyber incident is also facing increased financial risk. That goes back to my transmission channel. There's a reputational risk, and runs, and fire sales. It might not be able to make debt payments. It might face a liquidity squeeze. Your financial risk and cyber risk, when realized, amplify the other. These are all interconnected, and we need to be giving a lot of thought to operational risk, and cyber risk, and operational resiliency.
I'll use the rest of my remarks to talk about some initiatives that are underway in the official sector, regarding efforts to build resilience to cyber risk. As I thought about the initiatives that I know of, I don't know that I know of all the ones that are going on. I'm sure there are ones that I'm not even allowed to know about, and anybody in my seat on this panel probably spends more of their time in some areas than others. This is a high-level discussion, and I've chosen to put things in four different buckets. I'm going to start with efforts to mitigate vulnerabilities through the supervisory process.
Obviously, the official sector puts a lot of effort into mitigating vulnerabilities through supervision and oversight of financial institutions and financial markets. This is going to that firm-level vulnerability that was in my transmission channel. It's really bottom up, making sure the firms are not just safe and sound in terms of financial risk, but also cybersecurity. While a lot of resources go into this, the official sector is expanding the resources going into it and expanding its capabilities and evolving them as the threat landscape changes. Very recently, a notification rule for banking organizations and bank service providers went into effect. It requires banking organizations and bank service providers, within 36 hours of discovering that a cyber event is underway or was underway, whenever you discover this, to notify their primary regulator or supervisor of the event. This is a big advance, because it puts a time limit on it. It requires it of all of these organizations, banks, and their service providers. It means that, while maintaining the confidentiality of the reporting organization, the official sector can make sure that "shields are up" as much as possible for the rest of the financial sector, so that we reduce the odds that other entities have the same experience, even where accidents and the like occur, configuration problems and things like that. Those are all lessons to be learned, and we're more likely to learn about them.
The second bucket is collaboration across the official sector, and a lot of collaboration goes on domestically and internationally. A lot of this takes the form of tabletop exercises and scenario analysis. The G7 and the Financial Stability Board at the international level are involved as well as other organizations. A lot of it generates communications plans and response playbooks. Response playbooks are really important. When I think about cyber risk, I often think about the local fire department and fires and try to do the analogy. I think of the response playbook as being like your local fire department knowing which fire hydrant is closest to your house, which you really want them to know, and that it's working. In a typical situation of financial distress, maybe the official sector can email a representative at the bank and see how things are doing and that communication will work. In a cyber event, it might not. The email system might be offline. Voice over internet might not work. You need to really think through what the response playbook is. Because cyber risk respects no boundaries, geographic or jurisdictional, it's important to really make sure that communications are good about what's going on where.
A third area of work is in monitoring vulnerabilities to financial system stability. This gets at both the firm-level and the system-level vulnerabilities. The Treasury's Office of Financial Research, the Federal Reserve Board, the IMF, the ECB, pick almost any financial sector-related official entity, they generally these days put out something called a Financial Stability Report, in which they report on their assessment of vulnerabilities to the financial system, and progress in mitigating them. Related to this, we're increasingly seeing research being done, longer-term research projects, that study actual cyber incidents, or hypothetical ones. Fortunately, we don't have too many actual ones in the financial sector, at least, ones we talk about. Increasingly, if we can know about an incident, or hypothesize it, you can do quite an interesting analysis, because if you know exactly which entities experienced the cyber event, you have this nice controlled experiment that you can do. Research is coming out, and that helps us better understand the vulnerabilities, what spillovers could occur, might occur, did occur, and the mitigants used, how they worked, and how to quantify them all.
The last area is efforts to expand our understanding of cyber risk measurement. I said at the start that cyber and financial stability is a relatively new area. We've had data on loan defaults for my whole life. We're pretty new at thinking about cyber risk and how we measure it, not just within a firm, but getting data so that we can look across firms and look at the effectiveness of different controls. How do you assess the vulnerabilities if you can't measure them, if you can't quantify them and track them over time, and see if your efforts to build resilience are making progress? There are a lot of efforts here, and a lot of informal ones. One more formal one I know about is a collaboration by the Federal Reserve Bank of Richmond, the Federal Reserve Board of Governors, and MIT, where we are talking among ourselves but also bringing to the table representatives from the private financial sector to talk about, "what is cyber risk?" How do we define it? How do we view the transmission channels? How do we measure it? What data do they have available? What data do we have available? What data should be available? A lot of work is going on there. With that I'll say, "thank you," and I look forward to the discussion.
Mosser: Thank you, Stacey. We do have a few questions from the audience, which is great. I've got to say, though, I'm going to take the moderator's prerogative here and follow up on a couple of things first that, at least so far, I haven't seen in the questions from the room. I have to bring up geopolitics, since it's just top of mind for everybody these days. We've heard more about political threats from a macroeconomic perspective in the last couple of days than we typically do at a conference like this.
Jay, you mentioned early on that geopolitical risks, threat actors by states, nation states, have been growing. Just from a threat landscape, as opposed to theft and criminals, et cetera, should we be thinking differently about how to defend? Do you think differently about how to defend if it's a nation state doing the attacking, and if they have political purposes, not just to steal or whatever? I'm curious.
Healey: Not necessarily, for most of it, and especially the closer you are to the technology. When I talk to my colleagues that are the real cyber defenders, they'll say, "I don't care. It does not matter to me. I've got someone in my system, I have someone that's trying to do bad, it's my role to stop them. I simply don't care. It doesn't help me do anything differently."
Those tend to be the real technologists. I had a colleague 10 years ago, you know who it is, he's Australian, who said, "If we know at our bank we are going to be doing a deal, or one of our clients put in a bid for Brazilian pre-salt oil exploration rights, we'll know which Russian and Chinese companies are going to be bidding against them. We use cyber threat intelligence. We know these groups well enough to say, 'If one of them is a Chinese oil company, which Chinese cyber threat actors tend to support that Chinese oil major?' We'll also know what techniques they try and use. Do they try and use phishing? How do they operate? We'll go to our division within the firm that's involved in this deal, and not only set the protection to be on the lookout for that group using those techniques, but we'll start looking. Maybe they were already there, and they're hiding, and we just don't know it."
That was a very advanced use, and it was 10 years ago. To bring it up to the moment, there's some debate. I sit with the international relations crowd, and there's some debate about whether cyber is really ever going to be useful on the battlefield or for any kind of real national security-style disruption. I come down on the side of those that say it certainly will, because I've been so trained by all of the systemic risk and seeing how these effects might cascade.
There's been an argument for a while: why would states ever do that? Why would criminal groups, why would anybody, disrupt the internet, or use the internet to really disrupt an infrastructure, when that's how we're all making money? That's how criminal groups make their money, and that's how states are involved. They're entangled, and so it's not in their interest to disrupt, for example, the finance sector, or to disrupt the energy sector. That is absolutely true, but it's also a 1914-style argument, right? It holds true until the moment when some state decides, "You know what? You're going to deal me out of the dollar economy, I have nothing to lose. I have a lot less to lose than you do if I disrupt SWIFT. I can't play in SWIFT? You're not going to deal me in? Great. I'll flip the table." Seeing Russian groups especially, Russian cyber threat actors, operating in European liquefied natural gas terminals, it doesn't take a lot to imagine under what circumstances they might decide enough is enough and they're going to disrupt that. Fortunately, we did see some of this prior to the invasion, but we haven't seen much against the finance sector since, at least the last that I heard.
Schreft: What do you say after that? You know, I'm not going to scare you as much as he just did, but you talk about geopolitical risk, and in the framework it's a shock. It's just another shock, and you need your shields up the way you would for anything else. Listening to Jay, I thought he gave a great argument for how interconnected the cyber and the financial risk is, and that within a business how interconnected those have to be. Pretty amazing.
Healey: Before we go to the questions, can I add one to...because I really like Stacey's graph that had the four areas of where we're making progress. I thought that was really, really useful. I might add one. I'm curious your thoughts on what's happening in the private sector. The private sector has gotten together and created the ARC, I forget what the "ARC" stands for. It had been the Financial Stability Analysis and Resilience Center, and now I guess it's the Analysis and Resilience Center for Systemic Stability, or something like that...the Section 9 banks getting together and saying, "We're going to put our money in to do a better job at this."
My wife's a credit risk analyst, and I was astounded, she had been with Fitch when we met, and I was astounded when I went to Moody's because they were having a little cybersecurity working group. I'm like, "Isn't that cute? They have a cybersecurity working group. We'll see where they're at." They had every one of the business verticals around the table, industrials, FIs, sovereign, sub-sovereign, automotive. They had 25, 30 that were all saying, "For our ratings, how are we going to include cybersecurity when it comes to debt?" I was just super impressed, and that was five years ago, probably. Anyhow, it was pre-COVID, so who knows how long that was. Really seeing that the businesses, for their own interests, and in many cases for their own profit or their own resilience, are taking some really interesting steps. Sometimes that's related to what the regulators say or want, but a lot of times, like with Moody's, it's not.
Schreft: There are a lot of private initiatives, and public-private initiatives, because they're more comfortable sharing information with each other, but they also know that they're threats to each other, inadvertently.
Mosser: Absolutely. I'm going to turn to two related questions that are sort of the flip side of the same coin that have come in, because they're closely related to a question I was going to ask, so I'll use these instead. The first one is, "The banking system in the United States, and, frankly, quite a lot of other parts of the market, still depends on some aging technological infrastructure. Does that technological deficit increase risks? If so, how can we encourage modernization?"
Schreft: I'll start, because as I was listening to that I was thinking about the argument that's often made about our nuclear weapons, and how they run on DOS, and so...
Mosser: This is the scariest panel of the day. I'm just saying.
Schreft: There's aging, aging, and aging here. It can help, but it's also a big risk. People would argue that with what they call legacy technologies that don't play nicely together, and the complexity and problems that come with them, it's hard to understand how everything works together, and that increases risk.
Healey: Yes, and I was just thinking about the nuclear weapons example. If it's really obscure, and it's really hard to get to, and you get a couple of other "reallys" in there, then it's security by obscurity. It can help, because the adversaries have budgets, too. If you're using something obscure, that makes it more difficult for them to get in and learn it. But that's a very, very limited thing, and I wouldn't expect much out of it. It's much better to be moving off of the obsolete operating systems and the rest.
This is one thing that really surprised me when I got to New York, about how the New York CISOs, chief information security officers, felt about the cloud. In Washington, DC, when I talked to government people, they'd say, "The cloud is great, but," and then they'd go into the litany about the issues about cloud security. I get to New York and I'm expecting the same, and they say, "The cloud is great, and we haven't even begun to see the security benefits that you're able to achieve at scale at the cloud." They said, "Yes, there are security problems, but the security problems are far more manageable than those of the legacy system." The internet, the technologists don't want to tell you this, but 50-60 years ago when we were inventing that, they said, "No, it doesn't work that great. We'll put some band-aids on this stuff, and we'll come back and fix it." We never went back and fixed those band-aids. It was never built with security in mind. It just wasn't. They considered it, but they didn't really design for it. We've got six decades of these band-aids on all this stuff. What cloud allows is, substantially, you can say, "Forget all of that fundamentally insecure foundation, and we're going to build it on this newer foundation." We're still working through issues, especially having enough people that know cloud security.
One last thing I'll say, to some of you, especially for the firm level on Stacey's chart, it's allowed firms to really outsource, go to the cloud service providers, and really buy down a lot of their cybersecurity risk. It has increased the systemic risk, because we've traded off that cybersecurity risk and ended up going long on concentration risk with just a few vendors. Fortunately, those vendors are putting a lot of investment into that area, but it is one of those areas to look at, the systemic risk.
Mosser: I think Jay just gave his answer to the second question that was here, because, let me read the question: "Can you elaborate on banks' and financial markets' migration to cloud, and the concentration of risk to the financial system?" The economist in me just has to stand there and say, "Okay, you understand if you have a really, really safe basket, you put all your eggs in one basket and then you really, really watch the basket closely, but you are putting all of your eggs in one basket." I have to say, I have had the cyber security folks convince me that, given the sheer cost of cyber security for small- and medium-sized enterprises, medium-sized enterprises in particular, the benefits of cloud far, far outweigh the concentration risk simply because of the sheer expense of trying to do it yourself. Is that still true for the big guys as well?
Healey: Yes, it's way too complex to try and do yourself, especially for the small- and medium-sized, and even for the large. I do think that best practice is leaning toward, and this is something that Greg would have been good at, having separate cloud providers and having the technical ability, and practicing the ability, to switch back and forth between the providers. Because we're going to lose a cloud service provider, right? It happens periodically. Not for very long. If we're able to swap the traffic and the load from one to another, then I think that buys down a lot of that risk.
Schreft: We're not quite there yet, being able to do that. Yes, I guess I would just throw in that we have a very digital financial system today. It's hard to imagine it getting much more digital, but I'm sure it will. Those digital production functions essentially have zero marginal costs, so they're trending toward single providers. In some cases, it's only going to be demand for redundancy, maybe subsidized demand, sponsored demand, that ends up leaving us with even two providers of anything. Then you're going to have integration, where it makes sense that, say, an Amazon or a Google, if there's some new aspect of production that's digital, just acquires it and adds it into its own production line rather than having it external, because that's more efficient. A bank doesn't have to do business with too many vendors. It can just do business with one and have a one-stop shop.
I can think of a case with a technology provider, especially for small- and medium-sized banks, where instead of having lots of vendors for all your apps and digital functions, you're getting them all from one service provider. If that service provider is down, you could have 30, 50 different parts of your business line that are all not working because you have this one vendor. Now you've managed your vendor risk more easily. You only have one you have to track and make sure it's safe.
Healey: There might be some role for supervision, of saying, "If you're providing IT services to finance from the cloud, then you have to make it easy enough to switch back and forth," and it's up to the cloud providers to do that. I think, am I getting into the Bank Secrecy Act? There was some regulatory authority over technology providers to banks, and so...
Schreft: Like the Bank Service Company Act?
Healey: Yes, yes. Do you think there's a role for supervision in that, or is that kind of...?
Schreft: I don't know. This is really a question for a lawyer, I think, whether that act allows actually saying you have to provide the interoperability. The competition would point to not having interoperability. I think it would be some big customers that could basically say, "I'll do business with you if you offer that," and I don't know if that will happen or not.
Mosser: Now that we've talked about concentration, I want to go the other way. Stacey, you mentioned this in your remarks, just the explosion in financial innovation, technological innovation, in financial services like fintech, crypto, as you said there's the laundry list of what you want to call it. The associated fragmentation in parts of finance, distributed finance, if you like, how has that affected the ability of the system to withstand attacks? Do you think it's made it more dangerous? Or, has it diversified things?
Schreft: I think, like the other questions, the answer is "yes and no." It does both. It creates more vendor risk. You've got more of these technologies, APIs coming from more entities, and you get longer and more complex digital supply chains as a result. A lot of these fintechs end up outside of a lot of oversight. They're technology firms, not so much financial firms. It really is up to the banks and other financial institutions to try to monitor the cybersecurity. I've heard from some of the people at large financial institutions that basically, if they're going to use someone as a vendor, they basically say, "We're doing your cyber security for you." If they're going to use a fintech, they're going to do the cybersecurity for the firm.
How that works with competition is that if, say, one big bank is doing that, you can kind of see where eventually they're acquiring the firm. You would expect to see this fragmentation at first, but if there's something that's successful, it's all going to get consolidated and end up in one entity that is owned by somebody, or a few big dogs have all their entities. You get it both ways, and a transition that's very complicated.
Mosser: Yes, the complexity of the transition seems to be particularly important.
Healey: I meant to add on to Stacey's great comments on vendors. When you were talking about vendors aren't just...I remember where that really hit me was, one of the banks was going through every one of their contracts and saying, "Out of the companies that we work with, who has our data, who has what access to our systems?" It was incredibly intensive. They had a managing director overseeing it with a team of like seven people, and it was more than a year. That's a pretty expensive effort. The lesson that came out of it for them was the law firms just absolutely stood out, the criticality of the information that the law firms had, and the terrible security. That led to some early association with the FS-ISAC [Financial Services Information Sharing and Analysis Center] and New York law firms, saying, "You've got to do better," and I suspect it's really getting better now. Also, the banks and others saying, "How can we do this at scale? Reviewing every contract is way too intensive. What else can we do differently?"
That helped lead to the rise of companies like BitSight and SecurityScorecard. You might not have heard of them. They're basically doing a FICO score for cybersecurity. They call it that. It's probably a little bit more of a credit rating: "What observable things do we know about this company's security? Most obviously, if we scan their external perimeter are things routinely patched? Does it look well-managed? Also, do they have someone that knows cyber security on the board?" There really is the range of very technical up to the very organizational, and then you get a score. Like FICO, I think it's, I'm pretty sure it's 200-800. I know that some of the banks have just said, "Instead of going through all that contract nonsense, you're a 450 now. If you can get yourself to 650, you can be one of our contractors." They just leave it like that: "However you get there, that's up to you, but you've got to get your score up."
Like credit ratings, there's still a lot of problems with this. I know if our other panelist had been here, Greg was never happy about that when he was the CISO at JPMorgan. This is why banks and others have investor relations. You work with the people that are rating you and share with them nonpublic information that might affect the rating and help the analysts to understand the cyber risk of your particular company.
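[Editor's illustration] The score-based vendor gating Healey describes ("you're a 450 now; if you can get yourself to 650, you can be one of our contractors") amounts to a simple threshold policy. The sketch below is purely illustrative: the scale, the 650 cutoff as a bank requirement, the vendor names, and the function are assumptions, and it does not use BitSight's or SecurityScorecard's actual products or APIs.

```python
# Hypothetical sketch of score-based vendor gating: instead of reviewing every
# contract, require a minimum externally observed security rating. The scale,
# threshold, and data are invented for illustration only.

MIN_SCORE = 650          # bank's required rating, on an assumed 200-800 scale

vendor_scores = {
    "Acme Payroll": 710,
    "Example HVAC Services": 450,   # recall the Target breach entry point
    "Hypothetical Law Firm LLP": 630,
}

def vendor_eligible(score: int, minimum: int = MIN_SCORE) -> bool:
    """Gate vendor onboarding on an external security rating."""
    return score >= minimum

for name, score in vendor_scores.items():
    status = "eligible" if vendor_eligible(score) else f"must improve to {MIN_SCORE}"
    print(f"{name}: {score} -> {status}")
```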
Schreft: Jay's comments made me think of two things. One is a report that the banking supervisor for the state of New York put out regarding SolarWinds, where I think in a study of 100 banks in New York state that they oversee, 80 of them had SolarWinds, but the majority had not listed SolarWinds as a key vendor. They used it. They just didn't really recognize how central it was to operations, and a risk. That's one point.
Then related to, you mentioned BitSight and SecurityScorecard, and some of these cyber ratings. Fitch has done a couple of studies, one on banks and one on insurers, where it took the cybersecurity ratings for the firms and compared them to their credit ratings. No matter how good your credit rating is, there are always some firms that have not-great cyber security, but firms with better credit ratings did tend to have better cyber security, pointing to the fact that risk management and governance are either good or not at an organization, and that carries over even into the cyber realm.
Healey: Especially for the researchers here, it's one of the most basic things that you could do for research. We wanted to do it, Trish and I wanted to do it first, and we didn't get there. You've got a credit rating, and you've got a cyber security rating. Let's just compare the two. It is as basic as you can get, and it is still, I won't say state of the art, but there's so little that's been done that even that "let's take a little from column A, a little from column B," you end up with something that's getting talked about at the Fed FMC conference. There's a lot of low-hanging fruit.
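[Editor's illustration] The "column A, column B" comparison Healey mentions, putting a credit rating next to a cybersecurity rating, can be sketched in a few lines. The data and scales below are invented for illustration and are not the Fitch studies or any real dataset.

```python
# Hypothetical sketch of the basic research design described above: join credit
# ratings to cybersecurity ratings and measure the association. Data are invented.

import pandas as pd

credit = pd.DataFrame({
    "firm": ["A", "B", "C", "D", "E"],
    # letter ratings mapped to an ordinal scale (higher = stronger credit)
    "credit_rank": [9, 7, 7, 4, 2],
})

cyber = pd.DataFrame({
    "firm": ["A", "B", "C", "D", "E"],
    "cyber_score": [780, 720, 690, 560, 500],   # assumed security-rating scale
})

merged = credit.merge(cyber, on="firm")

# Rank correlation is a reasonable first look given ordinal credit ratings
print(merged[["credit_rank", "cyber_score"]].corr(method="spearman"))
```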
Mosser: Speaking of lessons to be learned, the IT industry has pretty well-established protocols for critical vulnerabilities, how to manage them, how to disclose them. Are there things that the financial sector could still learn from the way the IT sector does things?
Healey: I don't know. That is a great question.
Mosser: I have to say, the CISOs are going the other direction at this point.
Schreft: I do have something on this. I've gone to some of the people who really do cybersecurity and policy and stuff. I wanted to know or see that financial institutions are not just, say, again to my firefighter analogy, not just having plans to evacuate a building and doing fire drills, but we do see fire departments that will take a building in the middle of nowhere and set it on fire, and practice, and they'll do different scenarios. Don't you want to have replicas, some small scale of your business, where you can run through different scenarios and really practice? Because having contingency plans and not practicing them regularly, we've found does not work well for cybersecurity.
I've heard that for IT, there are companies that will set up kind of a replica of, say, a big bank's or a medium bank's IT infrastructure. You can go in and do a lab where you can practice, and you can see what happens. What I've heard is missing from that is the business lines, and how the business integrates. What we find whenever there's some real cyber incident is that we're always surprised, and the financial sector is surprised, by how interconnected the different business lines are, and what breaks and doesn't work when something in IT is not working. I guess we're all surprised, too, when our own IT doesn't work well, by what it is we can't get to and what we can't do. Imagine when you're relying on your bank or your asset manager, and you don't have access to your funds.
Healey: Many of the FIs are just so old and established that you end up back at your first question. You've got this infrastructure that's been around a long time, in a way that a lot of the IT companies don't. Not all FIs do, though. I've heard that with Marcus, when Goldman launched it, they were largely able to come in with newer IT than they had.
I've just been impressed by some of the tech companies, the risks they were willing to take to improve their internal infrastructure and to continue to test it. The first time I heard someone from Netflix on chaos engineering, they said they are routinely, as a matter of normal network operations, knocking servers, networks, even whole subregions offline. They are programming failure in, just saying that all of these items are down, on a fairly random basis. I'd be curious whether it's random or preprogrammed. The idea is to say, "Let's see how the system is going to respond."
They are constantly testing, almost like your immune system. You're always testing it, and you're always being exposed. That way, they say, when they actually have the outage, it's no big deal. They turn off the chaos engine to stop it, and then they're able to sail through. That's one of those extreme ideas. I brought that up to someone in the Pentagon. I was like, "Do you think they could..." I didn't even finish the sentence. "Do you think you could..." and he says, [laughing] "No, we can't do that."
I was curious why they dismissed it. Is Congress just never going to let us do that? Is it because your systems and your mission are just so important that you can't do it? Well, this is Netflix. If Netflix fails at this, they go bankrupt, right? They care pretty substantially about not going bankrupt. Maybe not as much as DOD doesn't want to lose a war, but they take it really seriously. Is it because you've got too much legacy infrastructure, because you can't train people to do it? Or are you just saying that because it's hard? If you're dismissing it just because it's hard, I'm not going to laugh along with you. Those kinds of things, chaos engineering is a pretty extreme one even for IT, but it gets right into that legacy infrastructure question.
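[Editor's illustration] The chaos-engineering practice Healey describes, routinely injecting failures during normal operations to see whether the overall service still works, can be illustrated with a toy simulation. The service names, topology, and failure rates below are invented; this is a sketch of the general idea, not Netflix's actual tooling.

```python
# Toy illustration of chaos engineering: randomly knock components offline
# during normal operation and check whether the system still serves.
# Service names, topology, and rates are invented for illustration.

import random

SERVICES = ["auth", "payments", "ledger-primary", "ledger-replica", "notifications"]

def run_chaos_round(kill_probability: float = 0.2) -> dict:
    """Randomly mark services as up or down, the way a 'chaos engine' would."""
    return {svc: (random.random() > kill_probability) for svc in SERVICES}

def system_still_serving(status: dict) -> bool:
    """A minimal resilience check: auth and payments must be up, and at least
    one copy of the ledger must survive."""
    ledger_ok = status["ledger-primary"] or status["ledger-replica"]
    return status["auth"] and status["payments"] and ledger_ok

random.seed(42)
survived = sum(system_still_serving(run_chaos_round()) for _ in range(1000))
print(f"Survived {survived} of 1000 chaos rounds")
```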
Schreft: That applies to all industries, all the critical infrastructures, energy. Texas and their electric grid, for example.
Mosser: We could take this one beyond financial services; we would have even more to talk about. A question garnering a lot of votes is: "How would you describe the threat landscape, either personal or institutional, for academics who are conducting cyber risk research? The threats would be from cyber threat actors themselves." I think this is probably more for Jay than Stacey.
Healey: Yes, it's been pretty substantial, especially for those of us in public policy schools. In the econ department, maybe not so bad, but when our acting dean is a China scholar and working for the State Department, we know that we're going to have the focus on us. Fortunately, Google and others, especially Google, have great extra protections that you can sign up for to lock down your account even further. I don't keep anything on my laptop. There's really nothing there, it's all on the cloud in case I lose things, or I get messed with, right? Two-factor authentication. It was hard to get myself up on the cloud. For think tanks, also. The academics and the think tanks are all pretty highly targeted, at least the geopolitically focused think tanks.
Mosser: Here's an interesting one: "Does the fact that the Fed is decentralized and has 12 Reserve Banks make it more vulnerable to cyber-attacks, more than, say, a standard centralized bank structure?" My understanding, the answer to that is "no," but Stacey may know more than I do.
Schreft: No, that it's not...?
Mosser: Not particularly, since its IT is pretty centralized.
Schreft: IT is pretty centralized. We went past that at a point, before cybersecurity was as much of a problem. I would say that the same benefits and costs that we talked about with cloud come from having it centralized.
Mosser: That doesn't centralize all the staff or all the business decisions, of course, which is still decentralized in quite a number of aspects.
Schreft: It's also the case that, as with any major IT provider, there are different locations, servers in different places, and lots of backup, all that good stuff.
Mosser: Yes, exactly. Okay, that's good.
Healey: Can I pick up on a previous point that Stacey made? You'd been talking about resilience and practicing for it, and things like that. It's a point that I make a lot. If you remember the chart that, actually, we both had, our transmission chart, it talked about the stuff that happens at the firm level. If you look at most of the effort of my community, and most of the investment, where the venture capital money is going, if you go to our conferences and walk the vendor floor, almost all of it, 95 percent, is aimed at improving technology inside the enterprise. The vast bulk of it. "What can we, as a company, or as the Fed, or whomever, do to better protect, better detect, better respond?" There's almost nothing about the system level, very, very little.
Part of what I've been trying to do is get people to ask, "Where has the smallest turn of the screwdriver given us the most advantage over attackers?" That has been things like Windows Update. It used to be that Windows would just patch willy-nilly; they would push patches out whenever one was ready. That made it very difficult for us as individuals, and very difficult for companies, because you never knew when it was going to come and you never knew what to expect. They built a technology, Windows Update, so your computer can say, "I need to update," and it does it automatically, as well as a process, Patch Tuesday. Now they patch once a month; everything comes out on the second Tuesday. The hackers say "Patch Tuesday, hack Wednesday," because once the patches come out, they know what they're supposed to be hacking.
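To make that cadence concrete: Patch Tuesday falls on the second Tuesday of each month, and computing the date is straightforward. A minimal sketch (the function name and example year are arbitrary):

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month (Microsoft's Patch Tuesday)."""
    d = date(year, month, 1)
    # Advance to the first Tuesday of the month (weekday() == 1 means Tuesday)...
    d += timedelta(days=(1 - d.weekday()) % 7)
    # ...then one more week to reach the second Tuesday.
    return d + timedelta(weeks=1)

if __name__ == "__main__":
    # Example: print the Patch Tuesday date for each month of 2022.
    for m in range(1, 13):
        print(patch_tuesday(2022, m))
```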
We tend to overlook those operational innovations. Where I was really going with this is that because so much of our investment is for each individual enterprise, every time someone comes up with some cool new hack, or your company buys some new technology, you have to buy some new counter-hacking device, a widget that goes on your network; then you have to train people for the widget, and you have to integrate the widget into the rest of your architecture. There's a lot that goes on behind it.
Whereas resilience is a general-purpose investment. It almost doesn't matter what hits the firm, whether it's a cybersecurity incident, an outage, an earthquake, or the death of key personnel. If you've been practicing for that resilience, having your cybersecurity teams, your physical security teams, all the way up to your boards of directors, running these drills and knowing how you're supposed to operate, then when the crisis happens it almost doesn't matter where the shock to your enterprise came from. It really is one of the best general-purpose investments, and I really like to see it for individual firms as well as for the finance sector. I don't know, are we still doing the Quantum Dawn exercises, or the Hamilton exercises? I'd really like to see those continue.
Schreft: When you talk about that new widget you have to install, what I thought of is that you need to assume the widget comes with a zero-day vulnerability, that it comes with something that is just waiting to be exploited. The question then is, what's your resilience to that?
Healey: It just adds to the complexity. Your problem is that you have an overly complex environment that you can't keep patched. You then add another system, and as the complexity grows, so does your attack surface...
Schreft: It's very hard to be resilient to all of that.
Mosser: A couple of last questions about more recent developments, and whether they're increasing or decreasing vulnerabilities. One is about work from home, with so many nodes being in individual homes rather than, shall we say, sitting somewhat more securely in an office. How much do we know about that vulnerability for financial firms in particular? We could ask this about a lot of other parts of the economy, too, but let's stick with finance for the moment.
Schreft: We know that at the start of the pandemic, financial firms in many cases weren't set up for working from home, partly because limiting remote work is one way to keep functions in the office: if you don't let people work from home as much or do certain functions from home, then you don't have that problem. Of course, they had to shift to it, and there have been a lot of improvements since. But when you think about Shields Up and those defenses, the first part of cybersecurity is knowing what you have to protect. It's called "identifying all your assets." This is why, once or twice a year, IT would send people into the physical office to crawl on the floor and record serial numbers from every device, your phone, everything, because you have to know what it is that you have to protect.
It's surprising how hard that is, but by itself you would think that's not the area where you would fall short. Keeping track of your devices is a very human, physical task, but when people are at home, and depending on whether you allow some mix of personal and business devices to be used, it just gets very, very complex. That's actually one of the things we think about in supervising financial institutions and their cybersecurity: how well are they doing at identifying the assets that need to be protected? It's a big deal.
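The asset-identification problem described here can be sketched as a simple reconciliation: compare the devices IT has on record against the devices actually observed connecting, which is exactly what gets murky once home and personal devices enter the mix. The serial numbers, owners, and log source below are invented for illustration.

```python
# Toy asset-inventory reconciliation: registered devices vs. devices observed
# connecting (e.g., from VPN or network logs). All records here are invented.
registered = {
    "SN-1001": {"owner": "alice", "type": "corporate-laptop"},
    "SN-1002": {"owner": "bob",   "type": "corporate-laptop"},
    "SN-1003": {"owner": "carol", "type": "desk-phone"},
}

observed = {"SN-1001", "SN-1002", "SN-2999"}  # SN-2999: unknown/personal device

unknown_devices = observed - registered.keys()   # connecting but not inventoried
unseen_assets = registered.keys() - observed     # inventoried but never observed

print("Unknown devices to investigate:", sorted(unknown_devices))
print("Registered assets not observed:", sorted(unseen_assets))
```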
Healey: Yes, that's a great answer. Fortunately, finance already had a lot of tools in place; they just had to accelerate, I think, a little bit compared to where they were. What I think we're worried about now is the return to the office. We got through the first transition and were able to adapt to working from home and take care of things. But now people are used to bringing their own devices, and they're used to a lot of other insecure work habits, which we really don't take care of while they're working at home. Now that those habits are going to be coming back into the office, I think we're in for a bit more of a shock. I'm sorry, I didn't mean "shock" in this audience. "Transition."
Mosser: One last question. This is actually something I'm very curious about from the financial stability standpoint. The move toward open banking, and the linkages between fintechs and traditional banks, would allow some pretty significant access to banks and their data through APIs. Is that a manageable risk, or not so manageable? What do you think? I'm curious about your views.
Schreft: My expectation is that, down the road, APIs that are useful get acquired and integrated; they become part of some package rather than a separate third entity. Right now, you can have technology firms whose whole business is that one API; that's where they make their money.
It always surprises me the extent to which people use certain banking apps and the like and don't realize that each step is actually some vendor whose technology your data is passing through. If you actually read the fine print that pops up asking you, as a customer, to agree before you use something for the first time, you probably wouldn't use it, because it's basically telling you that you're giving access to all of that. There's the cyber risk to the firm, but there's also risk to the individuals using it, and in the short run there's more of that risk.
Mosser: It struck me as, on the retail side in particular, actually fairly significant, potentially systemic cyber risk, depending on how the app is used and how the banks are monitoring their vendors.
Schreft: These are pure technology entities that are not subject to any oversight.
Mosser: Yes. No, I was thinking about the banks overseeing them as vendors, but yes, I completely agree with you.
Schreft: There's a lot of them, all the little APIs that are going into things.
Mosser: Absolutely. I promised you we'd end a little early so everybody could go out and enjoy the sunshine, so I think that's what we'll do. Thank you all very much. Thank you, Jay. Thank you, Stacey. I appreciate it.