Artificial intelligence (AI) has become a fact of life, but potential advances in AI have elicited alarm from some observers. This session examines why some traditional methods of regulation might fall short when it comes to managing the risks associated with intelligent and autonomous machines.

Transcript

Matt Scherer: What you probably aren't as aware of is that law tended to be less formal back before the Industrial Revolution. Legislatures met infrequently. There were no specialized agencies; those were an invention of the post-Industrial Revolution era. And there were really no large-scale public risks, at least none created by humans, so risks tended to be managed through the resolution of individual disputes. Then industrialization happened. The economy became more centralized and industrialized. Society became more urbanized, and law became more formalized.

So, how did industrialization impact law? Well, there were new challenges that industrialization created for the legal system: things like defective mass-produced products, workplace hazards, environmental threats, and large, powerful companies that could dominate entire sectors if they were not managed and regulated by some larger or more effective form of government than had existed prior to industrialization. The bottom line was that existing legal mechanisms weren't able to cope with the effects of these new public risks.

Now, what is a public risk? This is a term that refers to a potential source of harm that is either centrally or mass-produced or widely distributed and that's largely outside the control of the individual risk bearer. In other words, it's something that could harm a lot of people, and the people it could harm don't have any way of stopping the harm from happening themselves. Examples of public risks include nuclear technology, environmental threats, mass-produced consumer goods, and mechanized transportation. The question is, do autonomous systems fall into this category? Note that none of these things really existed prior to industrialization. There were environmental threats, but they weren't created by mankind, and certainly not on a scale that represented the same level of threat to society's safety and health that the industrial era's forms of public risk did.

So, what were the methods of risk management that arose or were strengthened after the Industrial Revolution? Well, I've got this little grid here showing the different forms of risk management that have traditionally been used to manage risk since industrialization. On the formal side of the ledger, you've got what I call both preemptive and reactive forms of risk management. Preemptive are things like legislation, agency rule making, and subsidies that are designed to either control or influence behavior before that behavior actually causes harm or creates risk to society. Then, on the reactive side, is the common law. I have a soft spot in my heart for the common law, and we'll talk about why in a second, but it tends to be a more reactive form of risk management than the items in the top-left corner of this grid.

On the informal side of the ledger, you've got industry standards, which are designed to be preemptive. They're designed to influence people's behavior before harm occurs, and then on the reactive side, you've got the free market, or consumer choice. Of course, there is a preemptive element to the free market as well, but the reality is that, for the most part, consumers usually don't take their money and go elsewhere as a result of risk until harm has already occurred. It tends to be a more reactive form of risk management. The big question for AI is, will these industrial-era methods of risk management be capable of managing the risks associated with AI and autonomous machines? The purpose of my paper and the purpose of this talk today is to discuss why that's going to be very, very challenging.

Let's talk about the things that were on the informal side of the ledger. Why isn't the free market a perfectly fine way to manage public risks? The first reason is information asymmetry. This is particularly salient in the context of risk management because producers have more information about the risks than consumers do, or, for that matter, than regulators and competitors do. This is particularly acute with emerging technologies. When a technology is completely new, there's a very small cadre of people who are going to be able to make even a semi-accurate assessment of what sorts of risk the technology poses. That is perhaps even more the case today with the big five tech companies in the United States and the big three in China that are increasingly concentrating the gathering of data, and data is, of course, the currency and the food for artificial intelligence systems.

The failure of the free market in managing industrial-era risks is really what led to the rise of the regulatory state. There was a certain recognition at some point that it wasn't enough to just let the free market run its course, that the government had to step in, at least in some circumstances, to manage risks that were being created by industrialization.

What about insurance? Insurance is a more promising market-oriented approach to managing risk. The problem is that it's very difficult to estimate risks with new technologies, for a lot of the same reasons related to information asymmetry. It's also very difficult to insure against large-scale public risks, and that, of course, is the reason why, when large-scale natural disasters happen, it's the government that is primarily responsible for both managing the risk in advance and then handling the cleanup afterwards. The free market has challenges when it comes to managing public risks, and for these reasons, I think those challenges would be particularly heightened in the context of emerging technologies.

What about industry standards and self-regulation? Well, this is often a question that I get from people, and there is maybe more promise for industry standards and self-regulation in the case of artificial intelligence, in part because the tech companies are at least making noises as if they understand the level of risk that their products may pose, or at least the level of anxiety that their products are creating. But industry standards, on the whole, tend to be ineffective for a couple of related reasons. The first is the fox-guarding-the-henhouse problem.

Industry effectively decides what level of risk is acceptable for the public when you have industry standards as the primary form of risk management. That's only going to work if industry's interests are very closely aligned with the interests of society at large. That's rarely the case. I said that for large companies, but really it's true for all companies. The reality is that firms act in their own private interest. That's how the free market works, and if their interests aren't aligned with those of society at large, then they're probably not going to make the same risk management decisions that the broader social structure they exist within would prefer.

The other issue is enforcement, and this stems from the fact that market participants in an industry standard–setting body can avoid the restrictions by simply leaving the industry standard–setting body or by never joining it in the first place. What you usually end up with when you have industry standards is one of two things. You either have weak standards that are ineffective and don't do all that much or strong standards that are never actually enforced.

Those are the reasons to be concerned about the informal forms of risk management from the post-Industrial Revolution era. What about the things on the formal side of the ledger? Well, unfortunately, there are just as many barriers, if not more, to traditional regulation as a form of risk management as there are for the informal forms. Let's talk about some of those shortcomings. The first is pretty simple. It's the fact that machines aren't people, and legal systems operate by acting upon persons. Even when you have a legal entity such as a corporation or an LLC [limited liability company] or some other form of artificial legal person, if you drill down in the law far enough, you always find that there is an assumption that there is a rational human decision maker with independent agency who is responsible for making the key, legally significant decisions. That's just an underpinning of our entire legal system, and the idea that someone or something other than a human being can make legally significant decisions is something that our legal system has never had to address before.

There are also concerns with foreseeability. This is a concern because the law generally hesitates to punish people for harm they couldn't have foreseen, and with machine learning—and you heard a bit about this yesterday—even the designers sometimes aren't able to fully understand, or certainly to describe, why their machines do something. That makes it difficult to assign and allocate responsibility in a way that allows for effective deterrence.

There are also concerns associated with control. This stems from the fact that autonomous systems' priorities and incentives may not align with ours even if we program them. In Peter Norvig and Stuart Russell's textbook on artificial intelligence, there's a great example. If you tell a machine, an AI system, that your goal is "End all human suffering," the system says, "Oh, that one's easy. We'll just kill everybody. Then there'll be no more suffering." Well, the machine may well understand that that's not actually what you meant, but it doesn't necessarily care, because what it's focused on is what you programmed it to do. Even if we are responsible for programming the machines, it's going to be very difficult to make sure that their values and the course of action that we want them to take align with ours, and that creates a control issue.
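
To make the point concrete, here is a minimal toy sketch (illustrative only, not from the talk or from Russell and Norvig) of that objective-misspecification problem: an optimizer given only the literal objective "minimize total suffering" will happily pick the degenerate plan. The plan names, numbers, and functions are purely illustrative assumptions.

```python
# Toy illustration only: a naive optimizer scoring candidate "plans" against a
# literal objective, with no notion of the designer's unstated constraints.

def total_suffering(num_humans: int, avg_suffering: float) -> float:
    """The literal objective the system was given."""
    return num_humans * avg_suffering

def naive_optimizer(plans):
    """Pick whichever plan minimizes the literal objective."""
    return min(plans, key=lambda p: total_suffering(**p["world"]))

plans = [
    {"name": "improve medicine",
     "world": {"num_humans": 7_800_000_000, "avg_suffering": 0.2}},
    {"name": "eliminate everyone",
     "world": {"num_humans": 0, "avg_suffering": 0.0}},
]

# The optimizer "doesn't care" what was meant; it only sees the programmed goal.
print(naive_optimizer(plans)["name"])  # -> eliminate everyone
```

The gap between the objective that gets scored and the objective that was intended is exactly the control issue described here.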

The last set of concerns relates to what I call wind shear. It refers to the fact that in the modern economic climate, regulators will have to deal with two cross-cutting trends, one toward atomization and the other toward concentration. It's somewhat like stagflation. This is a crowd that'll get that. The forms of regulation and intervention that tend to be effective in addressing one of those two trends tend not to be so effective in addressing the other.

Let's talk about atomization first. This is something that my paper spends quite a bit of time talking about. We're living in a world, and AI systems are being developed in a world, where decentralized economic activity is becoming more and more common and ever easier. You've got GitHub as an example. People all over the world can collaborate on programming projects and create code. You've got additive manufacturing and the maker movement, and the fragmentation of news sources. Customization and personalization are bywords now in virtually every industry, and I can't believe that I didn't put up here the contingent workforce and the sharing economy, which are leading to a decentralization of the labor market on a scale that hasn't been seen since industrialization.

There are a set of related problematic features of digital age development and research that also make regulation more challenging. There are four of them that I'd like to discuss. The first is called discreetness, and that refers to the fact that risky activities related to digital age technologies, including AI, can be done in locations and using methods that escape detection by regulators. When you've got a big nuclear reactor, you know where you need to focus your attention in order to manage the risks associated with it. With digital age technologies, you don't necessarily have that large physical footprint and the visibility that regulators like to have in order to know where to focus their attention.

There's also another form of discreteness, and I had to check several times to make sure I was spelling those two different terms, discreetness and discreteness, correctly. This relates to the fact that there might be risks that stem from the interaction of different components created at different times and places without conscious coordination. Really, this is atomization at its core. Then there's diffuseness, and this refers to the fact that those discrete designers and manufacturers of components may be located in different jurisdictions all over the world. There's no one legal authority that has the ability to regulate all of their behavior. The last one is opacity, and it refers to the fact that regulators may not be able to discover or understand why these things are risky. That issue is particularly acute in the context of machine learning because, as I mentioned before and as was mentioned in much greater detail yesterday, sometimes even the people who design the systems don't fully understand why the machine does what it does.

On the other side of the ledger, you've got concentration. We've talked about the atomizing forces. My paper doesn't talk as much about concentration, and that's in part because the amount of concentration that we have seen in the tech sector has grown, it seems, exponentially in the past couple of years. The revenue of the big five tech companies in 2016 was $556 billion. That is more than the GDP of Argentina. These companies are engaging in economic activity on a level that we probably haven't seen from too many industrial actors since at least the end of the Second World War. These companies have access to data that, in some cases, has a level of detail far exceeding that of the governments that are tasked with regulating them. Google probably knows more about you than the United States Census Bureau does.

These concentrating forces are maybe less problematic than the decentralizing forces, though, in my opinion, and the reason for that is that traditional formal regulation was built in part because there were very powerful companies that were creating risks, and there was no entity in existence that was big enough to stop them. That's the reason, in a nutshell, why regulatory agencies were set up in the first place. We could conceivably scale up the existing regulatory machinery to deal with the forces of concentration. It's hard to see how simply scaling it up is going to help with decentralized and atomized economic activity.

So, having talked about all the reasons why this will be challenging, I want to focus on starting a discussion on how we might go about fixing this. I think an important component of figuring out effective forms of risk management is understanding what the different potential actors in the regulatory world are good at and what they're not so good at. We'll start with the institutional competencies of legislatures. The good thing about legislatures is that they have democratic legitimacy. Because they're directly elected, they have the best claim to represent what society at large wants to have happen. As a result, they're really the only institution that is capable of setting policy on a society-wide level.

On the other hand, legislatures are inherently generalists. They don't have expertise in any one particular industry or with respect to any one particular problem that society has. Typically, they have to rely on things like committee hearings and contact with industry groups in order to get a sense of what the risks are in any particular industry. That's not the most reliable way to get information, and in addition, committees have become less powerful, and the effectiveness of committee hearings—if you watched the recent Facebook hearings, you'll know exactly what I'm talking about—is questionable. Fortunately, legislatures do have the ability to delegate and an associated power of oversight. Even though legislatures don't themselves have the expertise when it comes to these particular forms of technology, they can delegate their regulatory authority to an agency or other body that does.

So, agencies. These are the biggest innovation of the industrial era in terms of risk management. What is unique about agencies? Well, specialization and expertise. They can focus all of their time and efforts on a single industry or a single social problem, and they have expertise. You can staff them with technocrats and people who have relevant experience in the industry at issue. That being said, this is not so much an advantage in the context of emerging technologies because there are so few people who have the relevant expertise. Those people can usually command a much higher premium for their services than they're going to be able to get from a government agency.

Another feature of agencies is that they have a more flexible structure than courts and legislatures. The structure of courts and legislatures is largely static, and that's not going to change any time soon. You can set up individual courts to address new problems, but typically they work largely the same way that existing courts do. With agencies, you have a bit more flexibility. Another feature of agencies is that because the people who lead them are not directly elected, they have some insulation from the political pressures that the other branches of government face. But there's a downside to that, which is that because they don't have to answer to the voters, because they don't have to explain themselves or have direct contact with the public, they can be out of touch, and they can become too cozy with the industry actors that they do have regular contact with.

Another feature of agencies is that they have the ability to take action ex ante. Legislatures rarely can react quickly enough to respond to rapidly developing crises. Mike Lotito, a senior attorney at my firm, who's the head of its Workplace Policy Institute, loves to say that Congress is good at two things: doing nothing and overreacting. So, the reality is that if you want ex ante action, you typically are looking to somebody other than a legislature.

Courts. I have a soft spot in my heart for courts. I was a judicial clerk for four years, which is one year shy of the point at which they write you off from ever actually being a practicing lawyer for the rest of your life. I have the utmost respect for courts, and I would love to be able to tell you that courts are ready to manage this problem, but they have their own challenges, too. Courts' specialty is fact finding and adjudication. This means that they're very good at finding out what actually happened in a particular case and making a decision about the right thing to do in that case.

They're not so good at deciding generally what usually happens in a class of cases. That's a different sort of problem that courts are not well equipped to handle, and that's in part...and we'll get to this in a moment; I'll actually skip to it now and then circle back. Legal rules in courts are developed incrementally. They're developed through the resolution of individual disputes. As a result, broad social considerations never really make it in front of the court because, in theory, the court is only really interested in what happened in the particular case that's before it at the moment.

Another feature of courts is that they're reactive, for the most part. They have some power to take action ex ante, but in reality, that power is rather limited, and courts usually only come to the fore after harm has occurred. And because of when they enter the picture, courts tend to be reactionary. They don't like new forms of risk. They treat those new forms of risk more harshly than older, more familiar forms of risk, even though those older and familiar forms might create greater risk to society as a whole than the new ones.

Another feature of the court system—and this is my biggest pet peeve of the American legal system more generally—is that incentives are misaligned when it comes to managing public risks. Plaintiffs' lawyers choose cases not based on, "Hey. Let's optimize the best level of risk for society. Let's have a maximum social utility way of seeking justice in this country." They're focused instead on getting cases where they can get a lucrative settlement or a favorable verdict, and that's not unique to plaintiffs' lawyers. Lawyers in general focus on achieving victory in each particular case, not on providing courts with the information that the courts need in order to make good law.

As a related reason, because of these issues and also because there are way too many people with postgraduate degrees in the United States, it's all too easy to find somebody who will swear to whatever it is that the lawyer wants them to swear to, no matter how insane it is. Unfortunately, I've had the misfortune of seeing that happen many, many times in the courtroom in real time.

This is kind of a changing gears slide here. I mentioned a couple of times that with AI and with machine learning, we don't always understand what it is that's going on, and sometimes even the designers don't. That seems like a particularly unique challenge for regulators, but I actually want to explain why I'm not as concerned about this particular component of AI and of emerging technologies in general when it comes to public risk management. That's because this isn't a new problem. We have, for decades and centuries, been managing the risks associated with technologies that we don't fully understand.

Pharmaceuticals is my favorite example. The smallpox vaccine was developed in the late 1700s. At that point, not only did we not understand the mechanism of action of the smallpox vaccine, we didn't even know that smallpox was caused by a virus. In fact, we didn't even know what a virus was. In fact, we didn't even know that germs were the cause of disease. But nevertheless, our engineering, our recognition of how to build an effective treatment, outpaced our conceptual understanding of how that treatment worked. Even more recently, there are plenty of psychiatric drugs and other drugs out there where the people who developed them don't have a complete understanding of their mechanism of action. They just understand that they work.

How do we manage the risks that are associated with those technologies? Heavy, regimented regulation, the FDA [Food and Drug Administration]. That's how we manage the risks associated with pharmaceuticals. A product has to undergo rigorous testing and be proven safe before it can be marketed. Now, am I suggesting that we need an FDA-type system to manage the risks associated with AI? Absolutely not. All I'm saying is that the fact that we don't understand how something works is not an insurmountable impediment to coming up with an effective form of risk management for it.

Now, what would an effective form of regulation for AI look like? Well, I talked about, in my paper, what is a more traditional form of regulation, and it actually in some ways does resemble the FDA. The difference is that the proposal I make in my paper is not that if you don't go through the approval process, you can't market your product. It's that if you do go through the approval process, you will get limited liability for any harm that is caused by that product. If you take the time and effort and you wait to market it until you get it certified and audited as safe, then you get that limited liability, and if you don't do that, then you're strictly liable for any harm that it causes. So, the idea is to use the incentive of liability rather than direct regulation as the form of risk management.

That being said, I put that out there, and I put this out here as well, more to start a conversation than because I am confident that that's the appropriate approach to take for public risk management. The entire point of my paper and the entire point of my entire focus on AI in general is that these are difficult problems that nobody has a good answer to right now. It's important to get talking about potential solutions.

Another potential solution that I'll wrap up with is crowdsourcing. The general idea here is to require transparency and rely on stakeholders and the public at large to bring potential risks to the attention of government. The inspiration for this is the EU's regulation on chemicals, or REACH [Registration, Evaluation, Authorization, and Restriction of Chemicals]. When I say transparency, I don't mean it in the sense of being able to explain why the system does what it does. I mean just providing enough information that you can meaningfully assess the risk associated with the technology. There are, of course, IP [intellectual property] and security concerns when you have this form or level of transparency, but that just may be the cost paid to avoid public risk, and I think most companies may prefer to deal with those risks rather than wait, do nothing, and have Congress overreact and establish an FDA-style regulatory regime over the industry.

What would crowdsourced regulation look like? I think about it a lot like Wikipedia. You'd allow users, competitors, and members of the general public to report potential risks, and the idea is that if everybody is taking a look at this, you'll get it right eventually. By making relevant details of the AI systems available to everybody, the chances of risk detection are maximized. This is, I think, at least one way to think about how risk management could work in a world that is facing both decentralization and concentration at the same time.
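
A minimal sketch of what such a crowdsourced risk registry might look like in practice, assuming a hypothetical report structure and triage rule (none of these field names come from the talk or from REACH): anyone can file a report against a disclosed system, others can corroborate it, and regulators review the most-corroborated reports first.

```python
# Illustrative sketch only; field names and triage rule are assumptions.
from dataclasses import dataclass

@dataclass
class RiskReport:
    system_id: str        # identifier of the disclosed AI system
    reporter: str         # user, competitor, or member of the public
    description: str      # the potential harm being flagged
    corroborations: int = 0

class PublicRiskRegistry:
    def __init__(self):
        self.reports: list[RiskReport] = []

    def submit(self, report: RiskReport) -> int:
        self.reports.append(report)
        return len(self.reports) - 1       # index used to corroborate later

    def corroborate(self, index: int) -> None:
        self.reports[index].corroborations += 1

    def triage(self, n: int = 5) -> list[RiskReport]:
        """Surface the most-corroborated reports for regulators to review first."""
        return sorted(self.reports, key=lambda r: r.corroborations, reverse=True)[:n]

registry = PublicRiskRegistry()
i = registry.submit(RiskReport("trading-model-x", "competitor", "erratic quoting in thin markets"))
registry.corroborate(i)
print([r.system_id for r in registry.triage()])
```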

With that, I'll say thank you and turn it over to Greg.

Larry Wall: The floor is yours, Greg.

Greg Scopino: All right, thank you. Thank you very much. Aha. All right. Hopefully this'll work for me. My name's Greg Scopino, and I'm a special counsel with the U.S. Commodity Futures Trading Commission [CFTC], but in my personal capacity, I've written several law journal articles about issues related to automated systems and artificial intelligence in the financial markets, and I'm putting the finishing touches on a book on those topics. I also currently teach at Georgetown University Law Center. With that, I just want to say, of course, that everything I say here is just my personal opinion. It doesn't represent the commission, the individual commissioners, or other staff, including the wonderful LabCFTC fintech people at my agency. This is primarily drawn from the research and the work I've done.

I was tasked with responding to Matt's wonderful paper, and I'd first like to, of course, thank the Atlanta Fed for having me and Matt for writing this paper. His proposal, as outlined in the paper, was the creation of an artificial intelligence systems regulatory agency under an artificial intelligence development act, so basically a federal agency to certify the safety of AI systems. As he mentioned, it would have a tort law approach, strict liability for uncertified AI systems, negligence liability for certified AI systems, and a fund to compensate victims of insolvent firms that used harmful AI systems.

So, a couple things. One of the things that I liked about Matt's paper was the concept of public risk. I confess I had not run into that before, the idea of mass-produced threats that individuals are unable to avoid. I thought that was a very fitting analogy for the current state of some of the issues confronting financial markets right now. Although, I will say that I personally—and perhaps because I am a regulator—don't view regulation as necessarily a bad word, in large part because, as was discussed in reference to the institutional capacities of different branches of the government, Congress often will intentionally legislate very broadly and then leave it up to technocratic agencies to determine standards for safe drinking water or for other safety and regulatory issues. I think that's a sensible way to approach things. For me, regulation is less of a bad word.

One of the things you're all probably familiar with is that in recent years, there have been a series of high-speed market disruptions called flash crashes. I actually started with the CFTC the very week after the May 6, 2010, flash crash. At that point, I was an enforcement attorney, actually. Since that time, there have been more and more of these events across numerous asset classes. I'm sure many of you are very familiar with them. There have been flash crashes in silver, in gold, in virtual currencies, and it's entirely possible that automated systems might not be the sole cause of these events, but the very fact that they're happening so quickly, faster in many cases than human reaction times, means that, to some extent, these programs have to be playing a role in them.

I think the idea of public risk maps well onto some of the challenges that are facing financial markets today, and as our speaker from the Financial Stability Board, the FSB, referenced yesterday, they put out a report in November of last year in which they talked about potential systemic risk issues related to artificial intelligence in financial services. They said that it's possible the applications of AI and machine learning could result in new and unexpected forms of interconnectedness between financial markets and institutions and could create new systemically important players. That raises issues of whether the regulatory perimeter needs to be expanded, and that was part of the discussion that was referenced regarding third-party vendors in several discussions, one of them with Scott Bauguess of the SEC [Securities and Exchange Commission].

First, a confession. Although I am here to critique, as it were, Matt's paper, in my own scholarship, if you actually read it, I've more or less made arguments that are very similar to the things that Matt himself has suggested in his paper. For instance, I have suggested that the use of tort law as a mechanism for supervising automated trading systems and artificial intelligence in financial services would be useful. Part of the reason that I think that's going to become critical in the future is because, as Matt mentions, many of the existing causes of action for forms of market disruption or market misconduct require proof of mens rea, proof of a culpable mental state, an intent, and that intent has to be found in the brain of some human.

In the case of automated trading systems and overall automated financial systems, that intent, for one, may not exist because, as Matt's observed, someone could program a system not intending that it engage in, for example, wash trading, which is a form of self-dealing where you trade with yourself. It's been illegal in the futures markets since 1936. This is something that has become, in some cases, arguably more prevalent with the advent of algorithmic trading, but you have to prove intent. In other cases, there may be situations where a firm and its actors intended to engage in this misconduct, but it's going to be really hard to prove because we may be moving to an era where there are no longer going to be emails or other sorts of messages indicating this culpable intent. All we're going to have is the source code.
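
As a concrete illustration of the conduct itself (not of how intent would be proven), here is a minimal sketch of the kind of surveillance check wash trading invites: flag executed trades where the buy side and the sell side resolve to the same beneficial owner. The account-to-owner mapping and trade fields are hypothetical, not CFTC methodology.

```python
# Illustrative only: naive wash-trade screen over hypothetical trade records.
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    buy_account: str
    sell_account: str
    contract: str
    qty: int

# Hypothetical mapping from accounts to beneficial owners.
OWNER_OF = {"ACC-1": "Firm A", "ACC-2": "Firm A", "ACC-3": "Firm B"}

def flag_potential_wash_trades(trades):
    """Return trades whose buyer and seller resolve to the same owner."""
    return [
        t for t in trades
        if OWNER_OF.get(t.buy_account) is not None
        and OWNER_OF.get(t.buy_account) == OWNER_OF.get(t.sell_account)
    ]

trades = [
    Trade("T1", "ACC-1", "ACC-2", "CHF futures", 1),  # same owner on both sides
    Trade("T2", "ACC-1", "ACC-3", "CHF futures", 1),
]
print([t.trade_id for t in flag_potential_wash_trades(trades)])  # -> ['T1']
```

A screen like this can surface the pattern, but, as noted above, it says nothing about whether anyone intended the self-dealing.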

In some cases, as we've learned, the program itself will adapt with time based on its interactions with other systems in the markets. I have sort of advocated for the expansion of tort law and negligence-type principles in this area, and I've written about what have been called digital intermediaries: basically, automated systems that, when you map out all of the individual tasks done by regulated market intermediaries, could do those tasks, such that you would no longer need humans. I've also spoken positively of the idea of an identification or certification system for automated trading systems and algorithms. This was in the context of commenting favorably on the CFTC's 2013 concept release.

Therefore, many of my ideas seem to be very similar to what Matt argues. What do I have to say? Well, I think I have a lot of suggestions regarding how to bring some of these principles into our sphere, the sphere of regulating financial institutions, the area of regulating financial markets. Now, one of the problems that Matt identified in his paper was the issue of defining what artificial intelligence is, and I agree that, in many cases, definitions are some of the trickiest areas of the law. We can see that right now with initial coin offerings and whether they're securities or whether they're some other form of financial product. That's a natural tension because these definitions, of course, trigger the jurisdiction of some agency or some law. That makes them controversial.

I think in the near term, that's not necessarily going to be an insurmountable problem. I think it could be solved several ways. One is not to let the perfect be the enemy of the good. I think also that, in the near term, the focus is not going to be that an automated system is acting intelligently and that's why we'd want to regulate it; it's that the system is engaging in specific types of activities. Just one example, one that I teach in my class, is a case called CFTC v. Vartuli. It's from 2000, and this case actually is often cited in textbooks or other items about automated systems. If a person sells a software program that, when loaded onto customers' computers, tells them when to buy and sell futures contracts, then the person who's selling that software is a commodity trading adviser, and they're regulated as such. That's not new. That's 2000. You can look at the case yourself, but this raises issues, again mentioned by the Financial Stability Board, of how far to extend, if at all, the regulatory perimeter.

With Vartuli, you had a situation where they were making this program. It was called Recurrence. His name was Anthony Vartuli; his company was AVCO. He had another principal, and he was selling this program and also making it. But what if it was somebody who just wrote the code? What if he had a third-party vendor that helped? What, if any, regulatory requirements would apply to those people? Briefly, in the Vartuli case, how did this get to the Second Circuit Court of Appeals?

As you can imagine, it was an enforcement action. It was fraud. They were advertising that Recurrence was simply a way to make money: you just get a brokerage account with a futures broker, load Recurrence onto your computers, and then just watch the money roll in. In fact, they claimed that in the past they had made $2,500 trading one, only one, Swiss franc futures contract. They said Recurrence turned that into $130,000 in short order. You're probably wondering, "Why have they not invited Anthony Vartuli to participate in the next panel?" Well, that's because, in actuality, when you loaded Recurrence onto your computers, it probably started losing money and wouldn't stop until you shut it off. It was not very successful, and they continued advertising it as a way to make money, notwithstanding the fact that customers were complaining, "Hey, this thing is losing me lots of money."

I think, in the near term, the issues, especially for people in this room, are going to be, is the AI system acting like an investment company? Is the AI system acting like a broker, or is the AI system acting like a bank? I think that will sort of be more critically important than just an overall definition of artificial intelligence.

Now, regarding the idea of an independent agency for artificial intelligence, this is an idea that's been floated also. The European Parliament has suggested a regulatory body for AI and robotics. Personally, I'm not sure that an independent AI regulator would be the best regulator for financial market intermediaries or financial institutions. Now, maybe the financial regulators could work cooperatively with an AI regulator. Frankly, having more regulators work together does tend to slow things down. Here's what I expect as a different outcome, and just by way of background, this is the Chicago Board of Trade back in the day. Exchanges don't look like this anymore, actually. I'm sure most of you all know that. They don't even really look like this. That's NYMEX, the New York Mercantile Exchange, in the '80s. They aren't like that at all, and I don't have a slide for what they do look like because it's just computers. By and large, the financial markets for the most part have become automated, computerized systems. Most of the trading pits for futures and many types of derivatives are closed, and they're probably not going to come back open.

In my opinion, financial regulators are already AI regulators, or at least on their way to becoming artificial intelligence regulators, and you hear the leaders of exchanges like Nasdaq, and even the leaders at Goldman Sachs, repeatedly say, "We now view ourselves as tech firms. Our businesses are tied to data." I think the reality of what's going to happen is maybe you might have another type of artificial intelligence regulator, and it will join the other AI regulators like the SEC, the CFTC, and so forth. In this vein, frankly, although I'm not in any way anti-fintech, whenever I hear the term fintech or even suptech, it reminds me of something. It reminds me of my high school, and I'm probably older than most of the people in the room, because when I went to high school, people would say, "I wrote my term paper on a computer." If you said that now, people would think you're crazy: "What do you mean you wrote it on a computer? Of course you did. What else, cuneiform tablets?" What I think is that the entire financial markets are going to be fintech. It's going to be almost redundant.

Now, although I, myself, have advocated the use of tort law approaches to liability, there are some problems with this. For example, yesterday [Yesha] Yadav of Vanderbilt Law School argued that, for financial markets, liability frameworks are going to be ineffective. She believes that, due to the nature of complex systems, a certain level of error is endemic and that people can act reasonably and still potentially cause outsized harms. Her proposal is to put liability on the exchanges themselves, coupled with a joint market disruption fund, something similar to the Securities Investor Protection Corporation or federal deposit insurance, for market disruptions and market crashes. Some other people have critiqued the use of tort approaches, whether intentional, negligence, or supervisory, as being insufficient.

One thing I will say regarding liability—and I'm a fan of using common law, tort law approaches—is I think it'll do a lot. It's a workhorse in the law, and it'll get you many things. I don't think it'll get you everything, though. For example, exchanges and intermediaries already are, as I said, by and large a collection of computerized and automated systems, but I think tort liability is going to be an inefficient way to, say, regulate bank capital levels or even standards and procedures for exchanges and for clearinghouses. There are, in the derivatives sphere, core principles and regulations that require things such as the integrity of transactions and that relate to the composition of the boards of futures exchanges. I think some of these sorts of objectives are things that tort liability is just not going to be able to accomplish.

I also think there are limits to the ability of tort liability regimes to handle things such as ethics, proficiency requirements, and fitness. In these markets, there's mandatory ethics training. There are proficiency standards and fitness requirements. Now, some people may think, "Gosh, we now just have computers, so we don't have to worry about ethics anymore. It's kind of like Mr. Spock. He can't lie. These things are not going to rip people off. We're not going to have to worry about ethical requirements anymore." In all honesty, I'm not so sure that's true. I'd like to say it were otherwise, but already we have AI systems that are selling people things. I've been surprised to see more and more reports come out showing that they sell better than humans. It used to be thought that sales was an area where humans would inherently have an edge, but what they're finding is that what algorithms maybe lack in common sense and cognitive abilities, they make up for in persistence and in the ability to operate 24/7 and go really, really quick.

I recently read one thing that said they can now automate almost all of spear phishing, which is a very specific type of spam targeted to an individual. They are finding that there are algorithms that do better than humans at spear phishing, just scraping things off social media, putting things together, and then sending out 1,000 emails, whereas a human might try three times and say, "Well, they didn't answer." I think this is only going to become a more important issue, and at least in the financial market sector, one of the large areas of, and reasons for, regulation is people who interact with customers and who are selling things.

Another issue is that the concept of fraud is actually very much like negligence. It's a broad bucket which you can put a lot of things into, and it basically covers any sort of taking of somebody's property that doesn't meet the literal definition of theft. That includes material omissions. It can also include trading practices that give a false impression to other market participants. Even beyond fraud, you have suitability requirements and prohibitions against high-pressure sales tactics, in some cases with self-regulatory organizations. I think these are still going to be issues, and that raises the question of how you certify the safety of algorithms and AI systems in the context of ethical requirements and in the context of suitability requirements. I think it's going to be challenging.

Regarding the issue of certifying artificial intelligence systems, I think that's going to be extremely challenging in the near term. Just as an example, the CFTC proposed a source code repository for trading algorithms and experienced strong pushback from the industry. Let me just say "experienced strong pushback from the industry" is kind of an understatement. A more accurate word would be something like notorious. Anger might be a better word. Now, they had some legitimate concerns: proprietary information, intellectual property. Also, it's hard for the agency to keep attorneys and economists, so some of these companies will say, "Yeah, we'll give you our algorithm, and the economist that evaluates it is going to be working for one of our competitors next month." Another problem is that the federal government has been hacked. I've been hacked multiple times, including by way of the federal government, like, twice, so yay. But another issue that was raised was whether the CFTC would even have the capacity to understand and deal with the information in this code. That's a legitimate issue, so I think it's not politically feasible.

The current chair, who's very technology forward and has a rather bold vision for the agency, has himself been against these sorts of proposals and, generally from what I understand, is not in favor of certain components of the prior proposal. I think any sort of certification requirement or disclosure requirement is going to face stiff resistance. Just food for thought: my agency receives overarching directives from things like the National Archives and Records Administration [NARA] and the Office of Management and Budget [OMB]. What I'm wondering is whether it's possible that some independent AI regulator, or maybe OMB or NARA or some similar entity, could start promulgating guidelines for federal regulators on the use of algorithms, and that's going to be a big issue, I think, going forward. Other than that, thank you very much.

Wall: So, let me turn right away to the questions, and one of the first ones is pretty popular and I think raises a pretty interesting point. Much of this conference seems to presume that it is U.S. institutions and values that will shape AI diffusion. What's to guard against a transnational race to deploy? Can a single nation ring-fence the risks, or will we have the systems developed elsewhere, perfected outside our control, and then imported here?

Scherer: Well, this is the diffuseness issue that I talked about in my paper, and I agree. It's going to be very difficult for any single country to ring-fence the risks associated with AI. As far as it being a transnational race to deploy, that's absolutely the case. We're already seeing that right now. You've got Vladimir Putin last year saying that whoever masters the field of artificial intelligence will rule the world, and China has a very aggressive strategy in place for artificial intelligence development over the next 10 years. We're already seeing that, and it's particularly scary in the case of autonomous weapons systems, which is one of the things that the IEEE has a lot of people working on. That's a bit outside the ambit of this conference, but it's perhaps the zenith of the concern about an overanxious race to deploy these technologies: once you start having an AI arms race in the field of armaments, everybody's going to have to develop them or else they're not going to be able to keep up with the military capabilities of competitors.

I think that absolutely this is the concern, and again, as a lawyer, I like to think that I provide my clients not just with what their problem is but with how to solve it, but this is an instance where it is much easier for me to identify the challenges than it is to identify particular solutions that will be effective.

Scopino: And I would say that I think cross-border or transnational issues are some of the most critical ones facing current financial markets, and I think that'll continue, irrespective of AI. That's why I think it's good that there are things such as the Financial Stability Board, which hopefully can help avoid things like fragmentation and regulatory arbitrage, but it's a challenge.

Wall: Hopefully, we can get in a couple more questions. Another one that received a fair amount of attention, do we let machine learning models' designers off too lightly when it comes to understanding their models? My understanding is there are methods to pick apart the factors leading to model outcomes.

Scherer: I don't pretend to be an expert in the technical side of machine learning. I have spent time after my wife and daughter go to bed trying to bone up a little bit on the technical side of it, so I won't claim to be an expert, but frankly, the answer that I understand to this is no, not always. It's not always possible to do that. When you're dealing with modern machine learning techniques where you have deep learning that uses hundreds of layers of nodes or artificial neurons or however you want to characterize them, maybe it's theoretically possible that a human could trace every step that the machine took in order to reach the conclusion or to make the prediction that it does, but in practical terms, there's just not going to be a human being who has the ability to keep up with the computational power that the system is able to bring to bear. When you have the algorithm essentially adjusting itself hundreds of thousands or millions of times, it's very difficult to keep track of all of the factors that the machine is taking into account.
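
For readers curious what the question's "methods to pick apart the factors" refers to, here is a minimal sketch of one common post hoc technique, permutation importance: shuffle one input at a time and measure how much the model's error grows. It ranks inputs by influence, but, as Scherer says, it does not trace the individual steps a large network took. The toy model and data are illustrative assumptions, not any particular system discussed here.

```python
# Illustrative sketch of permutation importance on a toy model; not a claim
# about any particular trading or deep learning system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Outcome depends strongly on feature 0, weakly on feature 1, not at all on feature 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):
    """Stand-in for a trained model (here, simply the true relationship)."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def permutation_importance(model, X, y):
    base_error = np.mean((model(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-outcome link
        err = np.mean((model(X_perm) - y) ** 2)
        importances.append(err - base_error)          # error increase = importance
    return importances

print(permutation_importance(model, X, y))  # largest for feature 0, near zero for feature 2
```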

That's not to say that this isn't a challenge, and I actually used to be the one who would ask a question like this, until I got told by enough people in the world of machine learning, "No. You're asking us to do the impossible."

Wall: Okay. I'm going to pull up one more that hits at an issue that you were just starting to touch on: that ATS [automated trading systems] are certainly algorithms, but are they learning, i.e., self-improving? If not, might the existing regulatory structures be less prepared to take on machine learning and automated systems than an IT specialist regulator? I guess where I'm coming from on this is, if machines are constantly learning, in what sense can you talk about whether they're performing properly at a particular point in time?

Scopino: No, absolutely. It's going to be a challenge. My understanding is that, yes, similar to what we saw in the first discussion, depending, of course, on the program, they can react and change their programming based on information they receive from the markets. As far as whether there should be an IT specialist regulator, I still think it's going to be a challenge just related to...how should I put this? There's so much that's inherent in our area of the world. For example, sometimes when I'm talking with a work colleague outside of work about certain things, some of my wife's friends or other nonwork friends don't understand what we're talking about. It's like we're talking another language. Now, maybe you could get somebody who knows artificial intelligence and all of the financial talk. I mean, I don't know how many of those people there are. Maybe you could group people together. I think it would be a challenge.

I think what we're already seeing in my agency is basically that there are people working in data and people working in market oversight who are more or less, how should I put it, very familiar with these types of issues. I think it'd be easier to build capacity there, understanding it's going to be difficult, frankly, especially with the limited budget we have. So, it's definitely going to be a challenge, but I would say it's something that we're already doing.

Wall: Matt, do you have anything to add?

Scherer: I don't know enough about the way that automated trading systems currently operate to say for sure whether or not they qualify as learning, but in some sense, you're using machine learning any time you're using regression, and certainly when you're using a gradient in order to gradually improve a model. In that sense, anything that uses those techniques is a learning system. As I understand it, that certainly is the case for those algorithmic trading systems out there that use those techniques.
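
A minimal sketch of that point about regression plus a gradient already being "learning": the parameters below start at zero and improve themselves from observed data by repeatedly stepping down the gradient of the error. The data and step size are illustrative assumptions, not any particular trading model.

```python
# Illustrative gradient-descent linear regression; the "model" improves itself
# from observed data rather than being hand-set.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(scale=0.05, size=200)  # the relationship to be learned

w, b = 0.0, 0.0   # the system starts knowing nothing
lr = 0.1          # learning rate (step size)
for _ in range(500):
    err = (w * x + b) - y
    # gradient of mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))  # close to the 2.0 and 0.5 used to generate the data
```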

Wall: Okay. With that, thank you very much, Matt and Greg. Very good session. I learned a lot.