2018 Financial Markets Conference—Policy Session 4: Machines Learning Investments

Machines have been learning finance for decades, and algorithmic, high-frequency trading has received much attention. However, recent developments in data availability and storage, computer speed, and learning techniques might dramatically change and accelerate technology's participation in the financial system. This session will explore the strengths and weaknesses of using ML to design and execute portfolio strategies.

Transcript

Chester Spatt: Our first speaker will be Rishi Narang, who is the founding principal of T2AM.

Rishi Narang: Hi, everyone! You guys hear me okay? All right. Thanks to Lisa [Lee-Fogarty] and Paula [Tkac] and everyone for having me. I'm going to fly through this thing. Here we go, all right. That's what we're going to cover. We're going to cover where machine learning is successful, sort of some examples. We're going to talk about what we mean when we say investing. My goal, actually, in large part is to deconvolute this question of what we mean when we say "machine learning in investing." We're going to talk about these areas where we've seen some success of machine learning techniques in finance and in trading and investing, and then we'll conclude. Rock and roll. All right.

Machine learning is showing up everywhere—self-driving cars, a good example. You can see here this car with all kinds of sensors detecting other cars and lights and whatever. Highly interesting area of work in machine learning, image recognition, games. Recently, I think you guys might know that Heads-Up No-Limit Texas Hold 'Em was solved and that was a fun little holy grail for a lot of people. Malware detection, reading legal documents, natural language processing. I guess an interesting question is why. Why is machine learning working so well in these areas?

The biggest reason in my opinion is that there's an effectively unlimited amount of data—meaning if you need to play more games of Go, you just play more games of Go. If you need to tune up your algorithm's ability to drive in wet weather, you just send some cars to Seattle; you can just create more data. There's also a high degree—and this was said I think yesterday as well—a high degree of stationarity in the problem. Go is the same game as it was. It doesn't really change. Chess is the same game as it was. What this means is that you can cross-validate relatively easily.

The other part is that, and I think this was also mentioned yesterday, the algorithms used to solve these games and problems are highly tailored. There's no such thing as generalized intelligence at this point in the machine world. These are all very finely tuned, carefully worked-out algorithms to solve very specific problems. You couldn't use the chess one to do something else. It's good at chess and it's not really good at anything at all other than chess.

Let's take a step back from machine learning and just talk about investing for a minute. This is what investing is. Actually, I'm going to take a step further. This is the first time I've said this in front of a bunch of economists. I actually think that these things, these functions here, are all the functions that any allocator of scarce resources does, whether they mean to or not, whether they subsume some of these things or assume some of them away. Go through this as quickly as I can.

Data on the far left, very important. Without inputs, you have no hope of output. Without good input, you have no hope of good output. Also, starting with data is very interesting because you can learn or you can figure out a lot about what you're going to do given what you have to work with. If I'm going to make a meal, I look in the fridge. [If] I have nothing but vegetables, it means I cannot cook a steak dinner. It's not going to happen. There's no steak. You've got to know what you have, and it's got to be appropriate. Data drives everything in investing and most other things.

Then there's this box around all of the rest that says "research." The idea there is that you want to know why you're doing things, and in systematic investing in particular, research is the driver of that answer—the why.

What do I mean by research? We primarily mean back-testing. You come up with an idea, you test whether it worked in the past, and you have enough belief that the past is similar to the future that you're like, "Fine, if it worked then, it'll probably work going forward." There's a lot more to it than that, but that's the kind of rough summary.
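That rough summary—come up with an idea, test it on the past, believe the future resembles the past—can be sketched as a toy back-test. The prices and the momentum rule below are invented purely for illustration; they are not anything the speaker describes actually using:

```python
# Toy back-test: a made-up momentum rule tested on made-up prices.
# Everything here is hypothetical, for illustration only.

prices = [100, 101, 103, 102, 105, 107, 106, 110, 109, 112]

def daily_returns(p):
    return [(b - a) / a for a, b in zip(p, p[1:])]

def momentum_signal(rets, lookback=3):
    """+1 if the trailing `lookback`-day return is positive, else -1."""
    sig = []
    for t in range(len(rets)):
        if t < lookback:
            sig.append(0)                      # not enough history yet
        else:
            trailing = sum(rets[t - lookback:t])
            sig.append(1 if trailing > 0 else -1)
    return sig

rets = daily_returns(prices)
sig = momentum_signal(rets)
# sig[t] only uses returns through t-1, so applying it to the return at t
# involves no look-ahead.
pnl = sum(s * r for s, r in zip(sig, rets))
print(round(pnl, 4))
```

The crucial detail is that the signal at time t uses only information available before t—the look-ahead bias that invalidates naive back-tests is exactly what the "belief that the past resembles the future" has to be tested against.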

As to the individual pieces, there's alpha, risk, and t[ransaction] cost up at the top. There's a little thing called mixing underneath alpha. Alpha is an interesting term because, as conventionally defined, it's the residual part. It's a definition by negation. It's the part of your return that you can't explain by beta, meaning the return of the index times your exposure to that index.

I don't love that definition for a variety of reasons, but what I will offer is an alternative one that is not so measurable but is at least a positive definition of alpha. Alpha is the return that comes from skill at timing. The two important things that you can time are what you hold and how much of it you hold. Skill at timing the selection and sizing of portfolio holdings is what alpha is.

When we talk about mixing alphas, imagine that you are a portfolio manager and you have two analysts: one who reads charts—price charts—and says, "Oh look, there's a head-and-shoulders pattern or there's a trend or something," and the other is a value-based person who looks at fundamentals and talks to managements. They say very different things about the price of IBM over the next one year. Who do you listen to, and with what weight? This is a question of mixing. This will come up again in portfolio construction in just a minute.

Risk is an interesting topic as well because on some level most of these things are about the selection and sizing of exposures. Risk is important to understand because these are exposures you don't want to have or you want to limit. Cost is an interesting one as well. How much will it cost you to make a change in your portfolio? If you're trying to time, that inherently implies that you're going to make some change to the book, so how much does it cost to make that change?

The portfolio construction bit is a little like a CEO. If we anthropomorphize these other boxes, alpha is like the product team or the sales team, very optimistic: "Look at this cool thing we can do." The risk side is more like the legal team saying, "Oh boy! It's scary to do anything. We should probably just shut down so we don't get sued." Transaction cost is like the accounting department saying, "Well, everything costs." The CEO has to sit in the middle and say, "Right, given opportunity, risk, and cost, what do I actually want to do?" That's the portfolio construction bit; it is balancing those three factors, among perhaps other constraints. The resulting output is a target portfolio, which you compare to your current portfolio, and the differences between what you want to hold and what you do hold are what you need to trade. And those trades go into an execution algorithm.
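The last step described here—target portfolio minus current portfolio equals the trade list—is mechanical enough to sketch directly. The holdings below are hypothetical:

```python
# Sketch of the final step of portfolio construction: the difference
# between the target book and the current book is what you trade.
# Holdings are hypothetical, for illustration only.

current = {"IBM": 100, "AAPL": 250, "XOM": 50}
target  = {"IBM": 150, "AAPL": 200, "DIS": 80}

def trade_list(current, target):
    """Shares to trade per name: positive = buy, negative = sell."""
    names = set(current) | set(target)
    trades = {n: target.get(n, 0) - current.get(n, 0) for n in sorted(names)}
    return {n: q for n, q in trades.items() if q != 0}

print(trade_list(current, target))
```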

I mentioned at the beginning of this slide that my sense is that these are the same functions that anyone does when they're allocating scarce resources. If I'm the head of a fastener company—some metal fastener company—and I want to make a different kind of fastener, some team has told me that's a good idea or I thought of it. I have a lawyer, I have an accountant, and I have to decide given how much it costs to make new machines or get new machines to retrain staff, to hire new staff, etc. Given the risks of whatever kinds of—I don't know anything about fasteners, so I'm out of my depth here—but you get the idea.

These are the same functions I think everyone goes through. What's cool about systematic investing, just as an aside, is that because computers are blank slates, you can't assume much of anything away. They don't know what a stock is; they don't know what it means. "I want to buy cheap stocks"—that means nothing to a machine. You have to specify what you mean when you say cheap. What's a stock? You have to specify every single thing, which encourages a great deal of intentionality, thought, and research. Random aside—back to machine learning.

When we talk about using machine learning in investing, it theoretically could be any one of these little boxes or arrows. The thing that people focus on is alpha, which is ironic because that is not actually where it's most used and certainly not where it's most successfully used. So where is it used? It's mostly used in that arrow between data and the rest—I'll come right back to that—and in the mixing bit. It is not that often used in the alpha part, though it is sometimes used, and the jury is very much out as to the efficacy of machine-learning techniques as direct forecasting tools. It's not that much used in risk, and not that much used in transaction cost—there's certainly statistical learning that happens there, but it's not machine learning proper. It's not "fancy statistics with feedback," as someone wrote in a blog once.

There are some techniques from the field that are applied in portfolio construction sometimes, but not that often. Basic mean-variance optimization is the most commonly used thing in portfolio construction, or else something that gets at equal weights. Increasingly, we're going to see use in execution, but up until now, not so much. Really, it's mostly in that arrow between data and the rest, and in mixing signals.
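The mean-variance optimization mentioned here can be sketched in its simplest unconstrained form, where the optimal weights are proportional to the inverse of the covariance matrix times the expected returns, scaled by risk aversion. All the inputs below are made up for illustration:

```python
# Minimal unconstrained mean-variance sketch for two assets:
# w = (1 / gamma) * inverse(covariance) @ expected_returns.
# The inputs are invented, for illustration only.

mu = [0.05, 0.03]                  # expected returns
cov = [[0.04, 0.01],
       [0.01, 0.02]]               # covariance matrix
risk_aversion = 2.0                # gamma

# Invert the 2x2 covariance matrix by hand.
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv = [[ cov[1][1] / det, -cov[0][1] / det],
       [-cov[1][0] / det,  cov[0][0] / det]]

w = [sum(inv[i][j] * mu[j] for j in range(2)) / risk_aversion
     for i in range(2)]
print([round(x, 3) for x in w])
```

Real portfolio construction adds constraints (no leverage, position limits, transaction-cost penalties), but the balancing of expected return against risk is this same calculation at heart.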

We're going to focus on alpha for a minute and talk about why, even though it's the thing everyone thinks about when they think about investing using machine learning, it doesn't work so well, at least thus far. I'm not so sure many of these problems are tractable just given more data or more time or better machines or even better algorithms. The first is that this is a different kind of problem than the ones those successful ML solutions are trying to solve—I'll come back to all of these things.

The second is that cross-validation is very difficult. The third is that capital markets are very complicated and forecasting power is always low—a high R² out of sample is on the order of 0.03 or 0.04, which in most social sciences is a laugh. My wife is a research psychologist and she cracks up when we talk about what we deal with. If you have a 0.01 R² out of sample, you probably are going to go to jail. Interactions among participants make the markets change as you interact with the market. It's a dynamical system as well.

Let's sort of dive into these different problems. First of all, these are time series problems, not IID [independently and identically distributed] problems. One game of Go and the next are not that related; the price path of Google absolutely is important to understanding the future price of Google. The second is that we have not so much data. There are two billion monthly Facebook users—at least there were recently; I don't know how many have left. There are 10,000 to 20,000—and I'm being very generous with that estimate—tradable instruments in the world, globally. That's if you have the capacity to trade every single thing around the world.

Also, when you think about cross-validation, you're trying to build a model on this set of things and test it on that set of things, whether the things are time chunks or types of instruments. Even within the U.S. stock market, even within large-cap U.S. stocks, would you really want to build a model on Google and test it on Walmart? Would you actually expect that to work? Probably not.
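The time-chunk version of the cross-validation he describes is usually formalized as walk-forward validation: train only on the past, test only on the immediate future, and never let the splits look ahead. A minimal sketch, with arbitrary window sizes:

```python
# Sketch of walk-forward (time-ordered) cross-validation. Unlike the
# random train/test splits common in i.i.d. problems, each split here
# trains strictly on the past and tests on the immediate future.
# Window sizes are arbitrary, for illustration only.

def walk_forward_splits(n_obs, train_size, test_size):
    """Yield (train_indices, test_indices) pairs that never look ahead."""
    start = 0
    while start + train_size + test_size <= n_obs:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size           # slide the window forward

for train, test in walk_forward_splits(n_obs=10, train_size=4, test_size=2):
    print("train", train, "-> test", test)
```

Note that this only addresses the time-chunk half of his concern; the instrument-to-instrument half (Google versus Walmart) has no clean mechanical fix.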

The third is that the things themselves change over time. This little pie chart shows that Disney's revenues in 1972 were one-third derived from things other than theme parks and two-thirds derived from theme parks. Today, it's exactly the opposite: two-thirds not theme parks and one-third theme parks. Things change. They're not that comparable to one another, and there aren't that many of them to begin with. Cross-validation is a really serious problem in our world.

This is just a basic R² chart showing that, unlike in many sciences where a 0.08 R² is completely achievable, in our world there is a nice, crappy cloud of dots around the 0.03 line. Interaction effects are pretty serious. Now I'm going to show an order book video. Some of you may have seen this before. It was given to me by a friend at Blueshift, which used to be Tradeworx. What you see is that there is this order book—and I'm not going to get into all the details of the visualization here—but as soon as one thing changes, you can see how quickly the order book changes. This is happening in milliseconds, in microseconds.

The interaction effects happen at two levels. One is the microlevel where my order literally interacts with other orders. Another way that interaction effects happen is much slower and longer term. You can think of this the same way you think of profit margins in some new line of business. I come up with some new line of business, the profit margins are high. People will get attracted to high profit margins so they start to compete. I end up in a business where there's declining profit margins. At some point, they may even go negative, at which point there is consolidation. Instead of stably high profit margin in this line of business, now I oscillate around some much more meager average.

That happens with strategies. Things that used to work really, really well, people figure it out, they don't work so well going forward. Why is that? It's because the market adapts to the fact that there's free money or reasonably free money being made. As you play the game, the game changes, which is not a fun thing for machine learning to try to solve when you're actually trying to forecast, especially given those other factors, cross-validation and noise and all that.

Getting into where machine learning actually works best in investing, currently and probably for the foreseeable future: the first and most important thing is that there are a lot of kinds of data now being used that couldn't be used before. They're usable because of techniques like natural language processing, image recognition, speech recognition, and so on—transcripts of calls, or actually listening to calls, transcribing them, and then analyzing them to detect sentiment and other kinds of important bits of information.

I should say too that when we talk about data sets and novel data sets, they're really just getting at the same kinds of things, the same kinds of strategies that you were doing without them. A good example is there's all kinds of fun data sets around customer behavior in the retail sector. There's e-receipts data, there's credit card data, there's all kinds of data sets that are being sold now to hedge funds.

What is the point of these data sets? It's basically to identify which companies have accelerating revenues, which you can do a lot of other ways. This is just a more timely way of doing it. I'm not trying to trivialize how important that is, because as I said, the whole point of alpha is skill at timing. If you can find a way of getting more accurate or more timely estimates of growth, then that's good, but you're still just pursuing growth. It's not a new strategy, just better implementation of an old strategy. This is a highly successful area for machine learning in finance—I say finance as shorthand; I just mean in investing.

Signal mixing is another one. There are firms that have hundreds or thousands of signals. Some of them are multicollinear, so there are problems with that in the more traditional multiple-regression world. Those problems are much more easily and better solved with certain machine learning techniques, or techniques from that field. Some of them are not new.
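One standard way those multicollinearity problems get handled—a technique from the statistical-learning field, though, as he says, not a new one—is ridge regression, which penalizes large mixing weights and keeps them stable when signals are nearly copies of each other. A toy sketch with two nearly identical signals and made-up returns, with the two-predictor closed form written out by hand:

```python
# Sketch of mixing two nearly collinear alpha signals with ridge
# regression. Plain OLS weights explode when signals are highly
# correlated; the ridge penalty keeps the mix stable. All data are
# invented, for illustration only.

signal_a = [0.10, 0.20, 0.30, 0.40, 0.50]
signal_b = [0.11, 0.19, 0.31, 0.39, 0.51]   # almost a copy of signal_a
returns  = [0.12, 0.18, 0.33, 0.38, 0.52]

def ridge_2d(x1, x2, y, lam):
    """Closed-form (X'X + lam*I)^-1 X'y for two predictors, no intercept."""
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    g1 = sum(u * v for u, v in zip(x1, y))
    g2 = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b
    return ((d * g1 - b * g2) / det, (a * g2 - b * g1) / det)

w_ols = ridge_2d(signal_a, signal_b, returns, lam=0.0)   # lam=0 is plain OLS
w_ridge = ridge_2d(signal_a, signal_b, returns, lam=0.1)
print("OLS weights:  ", [round(w, 2) for w in w_ols])
print("ridge weights:", [round(w, 2) for w in w_ridge])
```

With no penalty the weights fly to opposite extremes (one large negative, one large positive); a small penalty splits the weight sensibly between the two near-identical signals.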

This is an area where, as I mentioned, I think we will continue to see more growth in the use of machine learning algorithms, but we haven't yet. Execution is an interesting topic, and you'll hear Christina in a moment, I think, get into this in much more detail. With execution algorithms, you're talking about interacting with the market at a microstructure level. That's the only thing we have that's like big data in trading—the U.S. order book. It's kind of the gold standard of large data sets as far as our world goes. It's laughably small compared to what the people at Google are dealing with, but it's still our biggest data set.

The point is that even though there is this dynamical effect that happens by interacting with markets, there is enough data being generated on a short enough time scale that you can actually cross-validate. It's also interesting in the sense that I almost think of an analogy of quantum physics versus classical physics, where classical physics is intuitive, at least for smart people—not me. Quantum physics is not intuitive for almost anyone.

It's a little bit the same with execution algorithms. The kinds of effects that happen at this subatomic level, the microstructure level, are so weird and hard to make good theories about that in many cases you're better off just letting the data tell you what's going on. There's enough of it here. The issue in the past is that there's also a lot of sensitivity to timing, to latency. Up until now, we haven't had machines fast enough, but at this point we do. I think you'll see more use there.

As for actual forecasting, which is the core driver of alpha, as I mentioned at the outset, the jury is still out on this topic. We do not see any evidence that machine learning funds are necessarily better than other funds, and it's going to be a long time before you can determine with any kind of statistical significance, even when there is someone under- or outperforming, how much of that is luck and how much of it's actually real. This is a very high-noise business as I mentioned.

I will say that in my estimation, the most likely way that we'll see a breakthrough in this area is by combining many weak forecasts into one stronger one. I think of an analogy to sensor networks, where you have many low signal-to-noise inputs telling you something. When they agree, perhaps there's something interesting there. That may be more where we see a breakthrough.
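The sensor-network analogy is easy to simulate: if each weak forecast is the true signal plus independent noise, averaging many of them shrinks the noise roughly as the square root of their number. Every number below is invented, for illustration only:

```python
# Sketch of combining many weak forecasts into one stronger one. Each
# "forecaster" sees the true signal plus independent noise; averaging
# N of them cuts the noise by roughly sqrt(N). All numbers are made up.
import random

random.seed(7)
true_signal = 0.02      # the quantity everyone is trying to forecast
noise_sd = 0.10         # each individual forecast is very noisy
n_forecasters = 400

forecasts = [true_signal + random.gauss(0, noise_sd)
             for _ in range(n_forecasters)]

# Typical error of one weak forecaster vs. the error of their average.
avg_single_error = sum(abs(f - true_signal) for f in forecasts) / n_forecasters
combined = sum(forecasts) / n_forecasters
combined_error = abs(combined - true_signal)
print(round(avg_single_error, 4), round(combined_error, 4))
```

The sqrt(N) improvement only holds if the forecasters' errors really are independent—which, in markets, is exactly the hard part.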

In conclusion, [machine learning] techniques are already useful in various functions in the broader framework of what you do as an investment manager. As a tool for generating alpha, I think the jury is out—it's an unproven claim that it's useful there. There's a lot of success in the integration of unstructured data sets and also in the combination of signals, or signal mixing, as we've been calling it. There's a lot of jargon and a lot of charlatans in this space. I cannot tell you the number of times I come across traders claiming to do machine learning, and they're literally just doing multiple regression. I mean, okay, as we heard, that's mostly what this stuff is, but there should be a little bit of fanciness to it.

Finally, I think there is no substitute for experience. There are a lot of techniques. There are new things coming out all the time. Someone who understands applied science well enough to know when tools are applicable is irreplaceable at this point. I think that's all I have.

Spatt: Thank you. Our next speaker will be Christina Qi, who's a partner at Domeyard.

Christina Qi: Thanks, Chester. I just wanted to give a brief intro to myself so you can have some context as to where I'm speaking from. I'm a practitioner in this space. I started Domeyard, which is a hedge fund focused on high-frequency trading [HFT], about six years ago now. In terms of using quantitative techniques, machine learning, artificial intelligence, things like that, we do use them to an extent. I'm here to speak not from the perspective of a regulator but from the perspective of a practitioner. I think Chester had a couple of questions as well. I don't know if you wanted to start by asking, or I can also give a high-level overview.

Spatt: Sure. What do you think are some of the most fruitful applications of machine learning in your space? How do high-frequency firms try to use it?

Qi: That's a really good question. I think actually Rishi was right off the bat in terms of where it's used and where it's not. One of the biggest misconceptions about high-frequency trading is that all we're doing is using artificial intelligence and machine learning in these really cool, really complex strategies, and then we can just run the strategy and come to Florida and go on vacation [laughter]. Unfortunately, that's not the case at all. There are a couple of misconceptions there. One is about the machine learning tools and techniques that we currently know about, that exist today: they're incredibly complex, and what complexity means, by definition, is that they're slower. As a high-frequency trading company, we rely on speed in order to succeed. Basically, we optimize for latency and not for throughput, which is what traditional machine learning does. That's a huge misconception there.

We don't use machine learning in the alpha generation portion of our company, or in the execution, actually—in terms of the actual execution of a strategy. What we can use it for are things like unstructured data sets. Say someone is parsing news feeds or something like that and needs to figure out the sentiment of those news feeds in terms of what's happening during the day—yeah, sure, that would be a really good example of it. But the actual trading itself is an area where we actually don't use machine learning.

The actual strategies themselves—you would be surprised—are actually quite simple in structure. They're not as complicated or fancy, I guess, as the things that Rishi was talking about. But I completely agree that we could use it for things like strategy selection, just not in the actual strategy itself.

Spatt: It's interesting. As an academic, I focused a bit upon equity market trading issues. You don't use this to decide which platforms or which exchanges you're going to send your particular orders to or how you're going to divide up the orders? You don't use machine learning for that?

Qi: Yes. Anything that requires speed in general—anywhere latency matters to us—is a scenario where we would actually avoid using machine learning, whereas anything more behind the scenes is different. If it comes down to looking at different data sources, or maybe selecting different types of strategies that we might be using toward the middle or beginning of the day or something like that, maybe that might be useful. Besides that, the actual execution itself is a scenario where we don't—and I would say not just us but other high-frequency trading firms as well—we don't traditionally use that.

Spatt: What do you see as your firm's edge? What gives you competitive advantage, so to speak?

Qi: That's a really good question, too. We get that question from all the investors that come into our office. I'll give you a story for some context here. Five years ago, we had an investor come into our office. He asked me, "Well, what makes you different from all the other firms out there? All the Citadels and Virtus and all the big high-frequency shops out there?"

Five years ago, I would tell him—to be honest, the answer back then was speed. I could give a one-word answer, speed, and it would be totally believable, because back then we were one of the fastest players out there. We were looking at data at maybe nanosecond resolution—something really fast—and on latency we were at maybe the single-digit microsecond level. Back then that was considered really fast.

Today, if that same investor came into our office and asked the same question—what's your advantage?—and I said speed, he'd just laugh in my face: "Hahaha!" The reason is that today the industry has matured to such an extent that speed is the bare minimum requirement to even be able to compete in the space. On top of speed, we need to have other things. We need robust technologies, good risk management systems. You need to have a good culture, even—teams, people who want to stay with you and don't want to just branch out and start another Tiger Cub or whatever. You want the best talent to work with you forever.

You have to be good at all kinds of different things. For us specifically, we try to be faster on the research side—to get from initial idea all the way to the execution of a strategy as fast as we can. We're not here to run a research lab like some other companies do. We want to make this a process that can take maybe three or four hours, even, if we're really good at what we're doing, or at least try to minimize the time it takes to generate a new strategy and to generate new ideas.

I think, to us, that's something that's very valuable because it gives us more opportunities to fail, more opportunities to try different things out. To be honest, it's a combination of everything today. You don't have to be the best at everything. Finance isn't a winner-take-all industry. Yesterday, I think it was John who mentioned Facebook during his keynote. Social media is a winner-take-all industry, because in the United States there's Facebook, in China there's WeChat, in Japan there's Line, in Korea there's KakaoTalk. Every country has its own one social media platform that everyone uses. Finance is designed so that there can be different players out there, including smaller players that compete against the large guys. The key for us is also to focus on areas that we know a Goldman Sachs would probably avoid, just because they're trading so much that it's not worth it for them to go into our space. There are various areas we have to think about constantly in terms of how to stay on top of things, but it's not just speed today.

Spatt: What do you see as the future of finance from your vantage point?

Qi: That's a good question as well. It's interesting, because the financial industry has changed a lot in the past three, four, or five years even. I'll give some examples. A lot of you guys are parents with kids. If your kids want to go into finance, I would usually advise telling them: today, if you want to go into finance, don't study finance in college. Study computer science, or study math, statistics, some kind of applied math. I studied finance in college—and that was not too long ago—but even the things we were learning back then, I don't use any of that in my job today.

The textbook stuff is good to know, but do I actually apply it in my role? Not so much. I've interned on discretionary trading desks before. Even the most discretionary trading desks out there today don't use those techniques we learned in the traditional finance classroom. Surprisingly, I've talked to some of them recently and they're using some machine learning even on a discretionary trading floor, which surprised me. They're not the ones programming it or anything like that, but they're using platforms where you can just plug in some parameters and they'll use machine learning behind the scenes to generate ideas or certain things they might want to trade. I thought that was really fascinating. That's one of the bigger changes I think we've seen.

Also, today, you've probably heard of people who are starting businesses in finance. Usually if I hear someone saying, "I'm starting a financial firm," I just assume it's fintech, because finance is becoming pretty synonymous with fintech. Almost every day, people come up to me saying, "You know, I want to start a hedge fund—how do you do it?" I assume it's going to be crypto. If it's not a crypto fund, then it's going to be quant. If it's not quant, I'm like, "Well, what are you doing?" That's just how common it is.

I don't want to give the wrong estimate, but I'm assuming there are over 200–250 crypto funds that exist today already. By the end of this year, it's going to go exponential. The number of people who come up to me every day saying they want to start a crypto fund is enormous—from academics and very experienced people all the way down to people starting one in a college dorm room. Everybody everywhere is doing this stuff. That's the future. I wouldn't be surprised if, in the next four or five years, this conference has a crypto session. I would not be surprised at all.

Spatt: Thank you. Maybe I could turn it over to Andrei and hear some of his perspectives, and then we'll start to do more formalized questions.

Andrei Kirilenko: Thank you, Chester. I'd like to thank the organizers for having me here, and also to give a little bit of background on where I'm speaking from. I lead the Fintech Center at Imperial. Before that, I was a professor at MIT. Before that, I was chief economist of the CFTC [Commodity Futures Trading Commission]. At the CFTC, beginning in 2008, I built a series of modules that led to a prototype automated surveillance functionality. The logic of it was that the markets had automated, but the surveillance by market regulators remained at a very human level.

I started building prototypes, and that led me to a very deep dive into algorithmic and high-frequency trading using audit trail data—data with lots of IDs that traders typically don't have—and we had it for the whole market. The CFTC receives data from exchanges. As you know, futures exchanges are silos, so the entire market for a contract trades on one exchange, like the e-mini, as opposed to the very fragmented nature of the equity space. The exercise was in some ways easier because of that data availability.

We built this functionality, my colleagues and I. On May 6, 2010, the flash crash happened. The flash crash happened on a Thursday; on Friday we received the data; on Saturday I brought my team to the office and we ran it through our prototype to see what we could find. That led to my leading the effort on the CFTC side on the flash crash, and then doing an even deeper dive into understanding how modern markets work and what makes them vulnerable—prototyping and reverse-engineering algorithms, understanding the ecosystem of algorithms, then further designing prototype algorithms, testing them, and going into all sorts of functionality.

What it exposed me to is an understanding, at a much wider level than an individual company or individual researcher would have, of what's out there. In very broad terms, I got scared. I got scared because what I thought was the price-discovery process that we have in our models is not what the price-discovery process is in modern automated markets.

The price-discovery process is very important because it feeds into fundamental economics: we think that prices aggregate information. This information then feeds into allocational decisions that drive our economy forward—it translates into investment, into capital finance, and into decisions that are made every second of every day by hundreds of millions of people. It turns out that at the level of price-discovery venues, the price-discovery process is not just affected by information and liquidity, which is what we would typically think—and which is good. You think that information should be aggregated into prices: someone knows something, it gets into prices. You think that liquidity should be aggregated into prices: someone wants to buy more, the price should change; someone wants to sell more, the price should change. That makes sense.

What if I were to tell you that at the individual price-discovery level, the price for the most part—well, let's take it to the extreme before I tell you that. That theory works the way classical mechanics works relative to quantum mechanics: at the classical level, objects are basically affected by gravitational forces. When you move to the quantum level, all of a sudden you see that gravitational forces become so weak they become invisible, yet we see very interesting patterns.

Christina just told you that her decisions are being made at the nanosecond level and execution is done at the microsecond level. What changed? What information and liquidity changed at the microsecond level? Our theory breaks down. Information hasn't moved, and liquidity probably hasn't moved either. Yet there was something to act upon. What is that something? It boils down to randomness that is actually embedded inside automated trading systems.

You can think of automated trading systems as giant computers with fiber optic cables around them that move around packets of data. These packets of data, as they move along parallel optic cables, actually move at different speeds because of different switch functionality. That technologically introduces randomness into when these packets arrive at the matching engine and when they end up in the order book. Technologically, it has nothing to do with information, nothing to do with liquidity; it is just the way the technology is laid out. That's why it affects our price-discovery process. Is that what we want? Do we want our price-discovery process to be affected by the technological configuration of our matching engines, of the technology that's in there? If you want to be operational at this level, there's money to be made on this, so there's a lot of money spent on figuring out how this randomness works. It's a latency process, a process associated with the delay of packets of data. These processes are well known among computer scientists. They're nonnormal. They look like power law distributions, where you have a large tail: sometimes something would happen. Power law distributions are not closed under sampling. They do not have the usual asymptotic properties. The law of large numbers doesn't apply, the central limit theorem doesn't apply. Statistics becomes different at this level. Machine learning becomes very challenging at this level. You can use it, but it's challenging.
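The statistical point can be illustrated with a toy simulation (the Pareto distribution as a stand-in for the latency process, and all the numbers, are illustrative assumptions, not anything from the panel): with a heavy-tailed, power-law latency, sample means refuse to settle down even across enormous samples, which is why the usual asymptotics are of so little help.

```python
import random

random.seed(42)  # reproducibility

def pareto_latency(alpha=1.5, scale=1.0):
    """One latency draw from a Pareto (power-law) distribution.
    With alpha <= 2 the variance is infinite, so the central limit
    theorem no longer applies in the usual way."""
    return scale / (random.random() ** (1.0 / alpha))

def sample_means(n_samples, n_trials=5, alpha=1.5):
    """Sample means across independent trials. For heavy tails these
    stay dispersed even as n_samples grows, unlike the Gaussian case."""
    return [sum(pareto_latency(alpha) for _ in range(n_samples)) / n_samples
            for _ in range(n_trials)]

means_small = sample_means(100)
means_large = sample_means(100_000)
print("means over 100 draws:  ", [round(m, 2) for m in means_small])
print("means over 100k draws: ", [round(m, 2) for m in means_large])
```

Even at 100,000 draws per trial, the trial-to-trial spread of the means does not shrink the way it would for a thin-tailed distribution.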

This is what I discovered. I started looking more into latency and how it translates into actionable decisions because if latency is out there, there's latency in the speed of light between sun and earth. It travels at a certain latency, but it doesn't affect our everyday lives. If actionable decisions are made by algorithms based on latency associated with the technological complexity of trading venues, then it potentially becomes problematic, and it potentially could lead to situations like flash crashes.

Flash crashes, as you know from the report that my colleagues and I worked on, did not start as a latency play. It started out as some fairly slow-moving—actually very slow-moving—automated execution algorithm. It started executing a sizable, very sizable, trade on the CME Globex. What that led to is various delays associated with what algorithms focus on, and it translated into actionable executions, into actions that algorithms have taken, especially cross-market arbitrage algorithms, the ones that arbitrage [inaudible] the mini [E-mini S&P 500] and the SPDR [Standard & Poor's Depositary Receipts], which translated that execution into actions across the U.S. marketplace and actually spilled over into other marketplaces, which led to a systemic event. Yes, it was a liquidity event, but it basically didn't have to happen.

What stopped the flash crash is an algorithm embedded inside the matching engine of the CME Globex, which is a hack. There are many, many different hacks built around that. This is so-called pretrade functionality that basically prevented trades from happening. It inserted a five-second pause, which interestingly enough basically broke down the latency pattern and reset the clocks for multiple algorithms. It reset the clocks in a way that now what went down started going up. Now algorithms interpreted that prices had reached the bottom, and they automatically decided to start buying the mini, and that translated into buying the SPDR. Then you have a bounce back. All of that happened because of this technological complexity of these markets, which was tripped by someone who did something very large and very stupid.

Things like that happened again. There was a Treasury flash crash that rattled a lot of people and led to a similar investigation. The Treasury flash crash report came back much, much less conclusive. It highlighted the latency issues again, highlighted technological complexity, but because of the complexity of the data it was difficult to decipher what actually happened there and how the transmission protocol worked.

The pound flash crash happened. Again, you have issues associated with technological complexity. I teach an algorithmic trading course to my students. I designed a course, it's Python-based, where students go in several days from knowing very little to actually pitching an idea for their quant fund, fully implemented on the data. That's how low the barriers to entry are now. If you have an idea, you can fully implement it within five days.

I give them quizzes and finals. One of the questions on the final that I have is that I actually modify something in the code, and I ask them to backtest the modification that I just made and interpret it for me. Sometimes what I would do is change a sign. In 500 lines of code, I would change one sign, and it would go from a beta-neutral long-short U.S. equity algo to a smart-beta long-only algo, which is 0.9 correlated with the U.S. market, one sign. If I were mean, what I could do is this: you allocate memory for the different objects that you create. In your algo, you need to allocate memory space, and you can allocate it to a specific type. You can allocate memory space to an integer, you can allocate memory space to a natural number, you can allocate memory space to a complex number. If I were mean, I could actually ask them to allocate memory space to a complex number instead of a natural number, and their whole algorithm would break down. That's how deep this thing is.
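A minimal sketch of the one-sign bug being described, in the Python the course uses (the signal values and the helper function are hypothetical, purely for illustration): a dollar-neutral long-short book becomes a fully long book when the short leg's minus sign is dropped.

```python
def weights(signal, flip_sign=False):
    """Dollar-neutral long-short weights from a cross-sectional signal:
    long the top half, short the bottom half. With flip_sign=True the
    short leg's minus sign is dropped (the one-character bug), and the
    book becomes fully long."""
    ranked = sorted(signal.items(), key=lambda kv: kv[1], reverse=True)
    half = len(ranked) // 2
    w = {}
    for i, (name, _) in enumerate(ranked):
        side = 1.0 if i < half else -1.0
        if flip_sign:
            side = abs(side)  # the one-sign change
        w[name] = side / len(ranked)
    return w

# Hypothetical signal values for four stocks.
signal = {"A": 2.0, "B": 1.0, "C": -1.0, "D": -2.0}
neutral = weights(signal)
buggy = weights(signal, flip_sign=True)
print("net exposure, intended:", sum(neutral.values()))    # 0.0
print("net exposure, sign flipped:", sum(buggy.values()))  # 1.0
```

Nothing else in the code changes, yet the net market exposure goes from zero to 100 percent, which is exactly why such a bug can sail through a backtest that looks plausible.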

Bugs inside these algorithms could exist at levels that actually could get through backtesting, and you wouldn't even know what hit you until something happened in the markets and your algorithms went sideways very, very quickly. You would go try to debug it, and it would take a really serious computer scientist to figure that out. There are significant possibilities of this happening. All of the exchanges, all trading venues, before they let your algo go live, run it through a series of stress tests. No algo would be allowed to go out there in the world without being stress-tested by a trading venue. Why? Because they don't want your algo in their ecosystem if it has stuff in it that they don't want. They are very protective of it. The stress test will stress-test for historical scenarios; they may not stress-test you for particular things. They'll put pretrade functionalities on you. Every single algorithm, every single algorithm, could be throttled, every single one of them. "Throttled" meaning it could be brought down, in terms of messages and communication with the trading platforms, to levels that are either agreed upon or are in the rule book.
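The message throttling described here can be sketched as a token bucket (the rate and burst numbers are made up for illustration; real venue limits live in the rule book or in bilateral agreements):

```python
class MessageThrottle:
    """Token-bucket message throttle of the kind trading venues impose:
    at most `rate` messages per second on average, bursts up to `burst`.
    The parameters below are illustrative, not any venue's real limits."""

    def __init__(self, rate, burst):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        """Return True if a message at time `now` (seconds) may be sent."""
        # Refill tokens for the time elapsed since the last message.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

throttle = MessageThrottle(rate=100, burst=5)
# A burst of 10 messages in the same millisecond: only the first 5 pass.
results = [throttle.allow(now=0.001) for _ in range(10)]
print(sum(results), "of", len(results), "messages allowed")  # 5 of 10
```

The same mechanism is what lets a runaway algo be "brought down" in message terms without touching its positions.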

When we talk about circuit breakers and all that, circuit breakers are so yesterday. Circuit breakers operate at the level of executions, after prices have moved. That's not what you want in an environment where you have algos making decisions like that. A lot of the functionality has moved very much pretrade, and so before a trade happens you want things to be throttled, or execution not allowed, before price discovery occurs, because once it's in the price discovery, for whatever reason, things start moving. This happens before it's even in the order book, before it's turned into an order. That's how complex things have become.

The answers to that, interestingly, are embedded in this complexity. Things that are more complex could also let us be much more surgical about how we view this market, but of course it requires a very highly specialized skill set, which is scarce. Are you scared now? You should be.

Spatt: Thank you. I'll ask some questions and we'll also use many of the questions from the audience. Let me begin with a question probably most suitable for Andrei; it relates directly to your remarks. How do you see the tie between machine learning and the issues of detecting the causes of the flash crash? Obviously, your analyses were done a number of years ago. Would machine learning enhance the ability to detect the causes, do you think?

Kirilenko: I agree with Rishi. The applications of machine learning I've seen are primarily on parsing the data, just primarily on parsing the data. Machine learning could actually be used, and actually agreed upon with trading venues, in protocols applied in a particular way, especially regarding latency anomalies and jitter anomalies, so that they don't become actionable too quickly.

Jitter is variability of latency. Latency varies for technological reasons; it's not a constant. Think of latency as volatility: it's a strictly positive variable that has jumps that are actually highly clustered, and from time to time it jumps up a lot. If it's within normal bounds, the algos will basically say, "Look, in my communication with exchanges and in the latency going through the market feed, things are within the normal, within baseline." But if latency is detected to be moving outside of normal bounds, algorithms will be told to do something. Sometimes "told to do something" would be, "Cancel everything I have, cancel all open orders, and also completely neutralize my position. Basically, if I'm long, sell everything; if I'm short, buy everything, so I'm neutral." That becomes disruptive, and it becomes very aggressive at this point. I think that there could be a way where machine learning could be used to coordinate efforts that otherwise, in an uncoordinated fashion, could become disruptive. That's what I think. But maybe other panelists would ... .
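A crude sketch of the kind of jitter monitor being described, assuming a rolling-median baseline and a made-up multiplier for "outside of normal bounds" (real systems are far more sophisticated than this):

```python
from statistics import median

def jitter_alarm(latencies_us, window=50, k=8.0):
    """Flag samples whose latency blows out past a robust baseline.
    Baseline = median of the previous `window` samples; a sample more
    than k times that median trips the alarm. Both `window` and `k`
    are illustrative assumptions, not production values."""
    alarms = []
    for i in range(window, len(latencies_us)):
        baseline = median(latencies_us[i - window:i])
        if latencies_us[i] > k * baseline:
            alarms.append(i)
    return alarms

# Synthetic feed: roughly 20 microseconds of steady latency, then one spike.
feed = [20 + (i % 3) for i in range(100)] + [400] + [20] * 20
print("alarm at sample(s):", jitter_alarm(feed))  # [100]
```

The dangerous part Andrei points to is not the detection itself but the reaction: if many algos wire "alarm" to "cancel everything and neutralize," the responses can synchronize and amplify the disruption.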

Qi: Totally agree. There's nothing else to add. By the way, Andrei, when you mentioned your classroom of students learning Python and using the platform to trade, and mentioned the barriers to entry, that reminded me of when we had interns from MIT. We assigned them this project of basically detecting anomalies in the marketplace: can they find instances of spoofing and spamming and other forms of market manipulation? I was really surprised too, because these students, they didn't know anything about market microstructure. They didn't know what a bid or an ask really was at the very beginning. They didn't know anything about that, or what a FIX engine is, or anything like that.

Within the course of a month, they were able to process over a terabyte of data on a single laptop, basically, and not only did they detect current instances of market manipulation, but what surprised us was that they were also able to find potentially new instances of manipulation that hadn't been detected yet. These are 20-year-old kids who are doing this. They came in with almost no knowledge of the markets. It just surprised me, because the barrier to entry, even though it seems really high, it really isn't as scary as it might seem at first if you just put some effort into it.

Then I know that Matt, in his presentation before us, had mentioned GitHub, which is something that we use, and so I just want to talk a little bit more about that so that you guys have the perspective of someone who actually uses it. Thanks to GitHub, one of our interns was able to take a trip to Tokyo during that summer. We were like, "Okay, well, you can go there, and thankfully you have a laptop and you have GitHub, which is a central repository for the company." Basically, for those of you who aren't familiar: there's GitHub, there's also, I think, Mercurial, and there are a couple of other tools that do this. Basically, it's a central repository of code for the entire company, where if you work on code from your laptop somewhere else, you can work on the same project pretty much simultaneously with someone else around the world, and they'll all just put their changes simultaneously into this repo.

What it does for us is a couple of things. One is obviously it lets you work from anywhere around the world and contribute your code. Another example is something we do called R&D tax credits. Right now it's tax season, so we were in the midst of that not too long ago. When we're doing the tax credit study, they have to come into our office and basically figure out who is working on what and what percentages they are contributing to the R&D process, right? I'll get back to the point, but basically these people who come in and do the study know nothing about high-frequency trading. They know nothing about quant finance or machine learning or anything.

All we did is we sent them the git logs, because when you're pushing something into the repository that we have, it gives you a timestamp of when you submitted something, how many lines of code you changed, and what exactly you changed. You can get into as much or as little detail as you want, but you don't have to reveal to them exactly what you worked on or the details of the code itself. We gave them those logs of all the files and who worked on what, and from those files they were able to determine who was responsible for what part of the code we were working on.

You can imagine if we handed that over to a regulator, we wouldn't be—I don't think we'd be too afraid at least. I'm not a compliance person but in terms of handing it over to a regulator to see who worked on what parts of the code. That's extremely useful. That's one of the uses that having a central repository provides. Just wanted to provide those two examples as things that we actually are using and how useful it could possibly be in the future.

Narang: It's okay. I want to take a step back and talk about flash crashes and also about price discovery. I have a maybe slightly different view: I'm not scared about them, so Andrei's talk, while totally valid, doesn't frighten me in the least. The biggest reasons are the following.

First, flash crashes are not new, and the utility function of market makers is not new. Which is to say: if I'm meant to provide you liquidity, and this is not an altruistic service, I'm doing it for profit, and I find that en masse you guys are about to stampede me with sell orders, do you think I'm not going to move the price aggressively? It's quadratic, right?

Flash crashes aren't new. They happened in the '20s, they happened in the '30s, they happened throughout human history. There are panics, and these things are not that big of a deal to me. I don't think volatility at this level is that troublesome. By the way, what's happened to the U.S. stock markets since the 2010 flash crash? Nothing really horrible, right? Volatility sort of reached record lows afterward.

Anyway, the second is that—and it's for reasons that are not necessarily useful, as Andrei correctly pointed out—there are complexities and interconnectedness in the market systems across asset classes, or instrument classes more accurately, and even within instrument classes across venues. The fact is still that the price discovery that's happening is real. Given what information a given market maker has at that moment, the behavior is actually pretty rational.

Can we stabilize the system further by undoing some of the unintended consequences of things like Reg NMS [Regulation National Market System]? Yes, we can; we should be exploring those things. Should we take out inefficiencies introduced by, for example, having a very latent SIP [securities information processor], which is the central market price feed, that canonical market price feed of current best bid and offer? Yeah, we should definitely fix the problems, but the problems aren't that problematic, I guess is the bigger-picture point.

I would also point out that we looked at some data on volatility. High-frequency trading and algorithmic trading on this microsecond time scale happens mostly during the trading day. I don't think that's too controversial a statement. Which is to say, you can compare what happens from close to open versus what happens from open to close, and you can normalize that by the level of outright volatility from close to close. You find that intraday volatility, which is the figure you would see go up if there were a problem with high-frequency trading, has not gone up over time. As high-frequency trading has proliferated, it's gone down. I don't have the slides with me because I wasn't really expecting to get into HFT today, but it's gone down. I'd say that the system, while it has these weird long tails that are very real: A, I don't think they're that consequential, and B, they're less frequent than before.
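The decomposition being described can be sketched like this (the price series is invented solely to show the mechanics of splitting close-to-close volatility into overnight and intraday pieces; it is not the data Rishi refers to):

```python
from math import log, sqrt

def vol_split(opens, closes):
    """Split close-to-close volatility into an overnight component
    (previous close to open) and an intraday component (open to close),
    using log returns and the population standard deviation."""
    overnight = [log(o / c) for o, c in zip(opens[1:], closes[:-1])]
    intraday = [log(c / o) for c, o in zip(closes[1:], opens[1:])]

    def vol(returns):
        mean = sum(returns) / len(returns)
        return sqrt(sum((r - mean) ** 2 for r in returns) / len(returns))

    return vol(overnight), vol(intraday)

# Invented daily open/close prices, purely to show the mechanics.
opens = [100.0, 101.0, 100.5, 102.0, 101.5, 103.0]
closes = [100.5, 100.8, 101.5, 101.8, 102.5, 103.2]
on_vol, id_vol = vol_split(opens, closes)
print(f"overnight vol: {on_vol:.4f}, intraday vol: {id_vol:.4f}")
```

Tracking the ratio of the intraday component to total close-to-close volatility over the years is the comparison the argument rests on.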

Qi: The other thing, just speaking after Rishi about these flash crashes: if you look at the Black Monday crash in the 1980s, people would argue that it took years for the market to recover back then. Some people I talk with, and I talk with many professors, would say it took 10 years to actually fully recover from that crash. Today, look at these smaller flash crashes, or the 2010 one, which was pretty big: it takes, what, 10 seconds to recover? That's because of algorithmic trading, because people are much faster today, and because a lot of people, even discretionary traders, are using machines to an extent to do their trading. The recovery period is, you snap your fingers and everything is pretty much recovered, and you don't really feel the long-term impact of it like we used to.

Kirilenko: Maybe I could add also—Rishi, you're not scared because you're not a regulator. If you're a regulator and you have to go testify to the U.S. Congress two days later, you'd be very scared.

Narang: Fair enough.

Kirilenko: You go to your staff and you ask them, "What just happened?" They'll say, "We don't know."

Spatt: That's a good segue ... maybe I can take the conversation in a little different direction. That's a good segue into a broad question. We heard yesterday about some of the ways in which the SEC uses machine learning. I'd like to ask the panelists: do they have any advice for the SEC as to how it should use machine learning for its regulatory purposes? Christina earlier hinted at some aspects of this.

Qi: Yeah. If we could have interns who could already spot anomalies in the marketplace, and they don't have that much experience to begin with, I'm assuming the SEC, with its greater resources than we have, could potentially utilize certain aspects of machine learning to detect these anomalies before a small startup in Boston detects them.

Narang: I think it's related to Andrei's answer to this question in the first place. What I know is that the SEC is making great strides toward using nonlawyers, that's actually what they call them internally. I've also heard this group called the League of Extraordinary Gentlemen, which is obviously sexist because there are some pretty badass chicks in that department. But the market, as you've seen it, is crazy smart now: all kinds of computer scientists and HFT guys, using tools built by HFTs to detect anomalies, to detect problematic behaviors. Really, under Gregg Berman's leadership, they took some huge strides forward. That is not the same thing as using machine learning to understand the problems. I think that's a much more challenging task than using technology and nontraditional methods of enforcement or detection.

Kirilenko: I have some knowledge, since I talk to my colleagues at the SEC. I am extremely complimentary of the efforts that they've taken, because when we were working with colleagues at the SEC on the flash crash, they didn't have the data and they didn't have the analytics. It was Gregg Berman being there who spearheaded this process forward. Now there's a consolidated audit trail, they have the data, and now they have an analytics team that is getting there. It's hard to get quality people. It's very hard to keep them. It's hard to create an environment. But they made a massive effort. The stuff that we heard about yesterday has been out there for a while. It's impressive stuff.

What it does to market participants: it puts a disciplining ... it puts discipline into the markets. It potentially prevents, in my opinion, some behavior from happening, because if you know that the regulators have the capacity to detect, you may not engage in some things, unless you're just sort of fly-by-night and you think they're not going to catch it. If you really want to do it as a profitable strategy, and you want to attract funds from investors, and you want to say, "This is a strategy, and these guys will never be able to catch me," people will say, "Look, they're writing papers using machine learning tools on tick-level data or message-level data. What are you thinking? Why do you think they're not going to catch you? They have the tools, they have the functionality, they can parse the data, they understand the data structure, they have the modules, they're looking at it. Forget about it; we're not doing this." I think it's positive for the markets for the regulators to send signals like that. Machine learning or not machine learning, just the fact that somebody has this functionality puts a lot of discipline into the marketplace.

Spatt: One question which many asked, but which unfortunately has disappeared from my screen—it was far and away the popular winner, it had nine votes—relates to Andrei's introductory remarks: if just flipping a sign can lead to these problems, how vulnerable is our system to hacking?

Kirilenko: Very. Very, very vulnerable. The sign example was an example of a particular algorithm: a single algorithm that is constrained in many other ways. Going out there, it's probably going to get throttled; it's probably not going to get very far. By the way, throttling is not something that exchanges just do on their own. Actually, market participants want their algos to be throttled, just in case there is a bug, so it doesn't run them out of capital in like 30 microseconds. Exchanges, or trading platforms, remain highly centralized, extremely, extremely sensitive pieces of code and data. They are vulnerable. What can you do about it? It becomes expensive if you really want to do something. They build clones; it's very hard to take them down, because physical security and cybersecurity are very hard. The way we deal with that in a very regulated way is that you cannot go directly and trade on an exchange. You need to have a broker/dealer. You need to have some intermediary who's going to intermediate for you.

Even if you have direct market access like Christina, she needs to have a broker/dealer before she can do anything. Broker/dealers potentially are also one of the sources of vulnerability. If there is a way to get to an exchange through a broker/dealer, and it could be done in a way that could cause harm, then it could potentially be a major issue. Central banks in particular come in if some of these broker/dealers or futures commission merchants are also running operations that are related to your mandate. That's one of the functionalities that's definitely worth looking at.

Narang: I have the good fortune of working informally with DARPA [Defense Advanced Research Projects Agency] on a project that started off being called FINWAR, and it's now called the Financial Vulnerabilities Project. It's a seedling still, I think, it may or may not end up...I think it probably will end up becoming an actual program at DARPA, which is a lot about this topic of hacking and vulnerabilities in the system that could be exploited by unfriendly parties.

It's actually fairly trivial to introduce flash crash types of scenarios. There was a spoofed news story a while back that got propagated about a bomb at the White House. This was a couple of years ago; it caused some very quick moves in the marketplace. The whole fake news thing—and I don't mean that in the current weird political climate, I mean in the actual sense of manufacturing news for the purpose of manipulating the market—that's a very real possibility. It's happened already. It will probably continue to happen, and it's almost impossible to guard against. The question is more what you do with it, and so circuit breakers and throttling and other kinds of techniques help there.

What was interesting in the brainstorming, besides the fact that we had this room full of HFTs and quants coming up with all kinds of disastrous scenarios, was that it seems very hard to create a systemic problem. What you can do is maybe a death by a thousand cuts of reducing confidence in the secondary markets. But actually inducing long-term pain into the secondary markets, in all of the analyses and all of the discussions and thought experiments we did, seems actually pretty hard. I think to hack the system enough to create short-term volatility is very easy. To hack the system sufficiently to create a long-term problem is much harder.

Qi: Yeah. I was going to say the exact same thing. There's a difference between a small-scale, one-person player in the market trying to manipulate something to get a better price or to take advantage of some opportunity, versus what you see in cryptocurrency. That person will probably get caught and fined and kicked out of whatever organizations they're a part of; that could happen. In cryptocurrency, there are these huge heists, these big hacks that are actually happening with these crypto-exchanges, where they're losing hundreds of millions of dollars all in a second and then they just go bankrupt. That's different. That's not going to happen to any of the current exchanges that we're trading on, or these guys are trading on, anytime soon. I guess there's a huge difference there.

Narang: We've also seen that they're perfectly willing to use a time machine and say, "Right, those trades didn't happen." They did, but the hell with it; after the fact, they didn't. There's that, too, which is kind of a nuclear option in some sense.

Spatt: Personally, I think that's a big problem.

Narang: They use it, but it's—they do it, but ...

Spatt: I'm aware that they did it. I thought the SEC's handling of the flash crash, creating a band of about plus/minus 60 percent within which trades were allowed and outside of which they were simply disallowed ... I think that was disastrous ...

Narang: Highly arbitrary.

Spatt: ...for markets, because it was completely arbitrary, and it creates a loss of confidence in the willingness to supply liquidity and also discourages parties from being careful about bearing the consequences of their actions. That actually ties in a little bit to a question that I put up on the screen a minute or two ago from the audience. The question is the following: given the power of artificial intelligence and machine learning techniques, should we necessarily be worried about limiting these crashes and bailing out the parties? For example, as arguably happened in a piece of the flash crash with the disallowing of these trades, should we just focus on having machine learning, having the data, and viewing failure as a very costly option?

Qi: I think with the evolution of technology there's always going to be some risk associated with it, and over time we learn how to stop those risks eventually. The same goes for back then, during the Industrial Revolution, when there were machines that broke and people were getting injured and dying and whatever else happened. You learn what those risks are and you learn to have safety, worker safety, for everyone. It's the same thing today. Evolution is unstoppable. It's going to happen regardless, and the underlying technology is always going to be there. At least from our view (I'm not a regulator), it's more about doing the best we can with an honest effort and then learning. Like for us, when the mini-crash happened on February 5, that was one of our first learning experiences where we were able to go through that. We didn't contribute to it, of course, but when we were trading we were learning a lot from it, and we were able to take that data so that in the future, when that happens again, we know how to do a better job.

Kirilenko: I'd like to emphasize this aspect of governance. I think governance is something that has helped us a lot in developing financial markets. One of the reasons that crypto-exchanges are so vulnerable is that wallets are held at the exchanges. There are no customer accounts actually held at national exchanges; you're not supposed to do that, because then they become honeypots. They will obviously become targets of attacks. You want to hold your customer account with a broker/dealer, and then you have the exchange and the matching engine. That way, through the governance process, you separate the possibility of funds being concentrated in one place and therefore lower the possibility of an action that could be adverse. The governance process around this technology is very, very important. If you want to segregate things, you want to shred them, you want to encrypt them, you want to put them in places where they're hard to find, and the technologies are now set up in ways where you can do that. You can still run your operations at the speeds that you run at, while making the system much less vulnerable, in actually not that costly a way.

Spatt: To the extent that an algorithm is using preprogrammed information, that seems very different from using it to discover new strategies. To what extent is machine learning focused on the development of new strategies? Is it largely focused on that? Is it also involved in the implementation of strategies that have already been identified? In some sense, the latter sounds more like an expert system already developed.

Narang: As I indicated in my talk, to the extent that we are seeing some folks actually attempting to take the discretion out of the discovery-of-alpha process, meaning to automate the theorization, the theoretical component of the strategy of what should be done, even there it's pretty close to an expert system. Still, I haven't seen an instance where someone literally says, let me throw all of the data I can get my hands on at a machine and have it figure out why things move, and thus forecast how they'll move in the future. There's still a high degree of curation of the data that you feed into the machine, and a lot of other aspects as well, leaving aside which algorithms to use and so on.

As I said, I think we're mainly seeing machine learning used to process data into structured data sets in particular, and also in the mixing of signals and the combination or selection of strategies, or the sizing of them to the extent there are multiple strategies. We're not seeing nearly as much use in the development of actual forecasts. That's a growing area, but again, the jury is out as to whether that's a tractable problem for these techniques to solve.
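A toy version of the "mixing of signals" step (the signal names, return histories, and the Sharpe-based weighting rule are all hypothetical illustrations; real systems apply actual learning techniques here rather than this simple ratio):

```python
from math import sqrt

def combine_signals(signal_returns, floor=0.0):
    """Weight each signal by a Sharpe-like score (mean / std of its
    past returns), clipped at `floor`, then normalize to sum to 1.
    A crude stand-in for the signal-mixing step."""
    scores = {}
    for name, rets in signal_returns.items():
        mean = sum(rets) / len(rets)
        var = sum((r - mean) ** 2 for r in rets) / len(rets)
        score = mean / sqrt(var) if var > 0 else 0.0
        scores[name] = max(score, floor)
    total = sum(scores.values()) or 1.0
    return {name: s / total for name, s in scores.items()}

# Hypothetical per-period return histories for three toy signals.
history = {
    "momentum": [0.02, 0.01, 0.03, -0.01, 0.02],
    "reversal": [0.00, 0.01, -0.01, 0.01, 0.00],
    "carry":    [-0.01, -0.02, 0.00, -0.01, -0.01],
}
mixed = combine_signals(history)
print(mixed)  # carry is clipped to zero; momentum gets the most weight
```

Replacing this scoring rule with a learned model, while the signals themselves stay human-designed, is the kind of application that is actually working in practice.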

Qi: We've come across some quantitative trading startups whose pitch is basically that the machines will do everything from signal generation all the way down to execution, including discovering new strategies themselves. Long story short, those startups failed; it did not work out. Out of the smaller firms we've talked to, at least (maybe some larger firms have the budget to do it), we have yet to meet a single company that has successfully used machine learning to generate new signals to trade in the marketplace. We haven't quite gotten there yet. I feel like in the future it could be possible, but today, at least for the alpha-generation portion, I don't think we've seen anyone succeed. I think the same thing holds even if you take one step back. Today, if you go to any of these financial expositions that hedge funds attend to look for service providers (if you ever want to feel popular, by the way, go to these expos and people will line up to talk to you), you'll see it.

These service providers are saying, "Oh, we have this new data source that we're going to sell." Sometimes it's satellite data now, with the oil rigs and things like that. It's pretty fascinating stuff. But if you look at the number of trading firms that actually take advantage of it and can actually generate profits from it, it's not that many, at least from what we've seen out there. We're not quite there yet.

Spatt: Can I ask a follow up to that? When you say it's not so much, is that because the information in that data is not so valuable? Or because so many people are subscribing to it that it eats up the rents? Maybe another way to say this is, to what degree does competition basically soak up the opportunities for profit because of all the expenses associated with developing these novel strategies? Whether it's looking at the oil rigs, or whether we're having drones fly over the malls' parking lots, or these types of data?

Narang: It's a super-interesting question because there's definitely a trade-off between how proprietary a piece of data is. Imagine that some deity came down and told you that Apple's going to miss its earnings two quarters from now. Is that information that useful today unless you're trading options? Probably not actually, because between now and then you can go bankrupt shorting Apple. What you really need to know is what's going to happen in this very proximate future, somewhere between milliseconds and months.

If you get some piece of data that's really clever about forecasting very far into the future with great accuracy, and you're the only one who has it, that information doesn't matter, because the scale and timing that we talked about for alpha only work if what you're forecasting is actually the aggregate behavior of other people in the relevant proximate future. What you really need to know is what other people are going to do before they do it. Probabilistically, that is. Unless you're going to outright break the law and front-run, the question is: probabilistically, what is the mass of humanity going to do about this stock or this futures contract or whatever?

There's this tradeoff: if you have this really cool data set that's totally unique to you, maybe no one else has it because no one else cares about that information in any relevant time frame. On the other hand, to your point, if everyone has it, like price and volume data now, it's very hard to extract alpha today from pure price and volume data. This is challenging as a buyer of data, and it's also challenging as a vendor of data. For example, I'm also lucky enough to be an adviser to a company called Planet, which takes daily images of the entire land mass of the planet, which is a pretty cool thing and a very interesting data problem.

When they were looking at selling this data set into the hedge fund landscape, they really struggled (and they're still wrestling, as far as I know) with the business model. Do you sell this for $10 million to five companies? Do you sell it for $50 a month to everyone? There's a definite tipping point after which it becomes commoditized.

Qi: Yeah. One common question I ask a lot of these data vendors, the ones with the drones or the satellite imagery, especially if they're a startup of maybe five people, is: why don't you just use that data yourselves? If you have really valuable data, even a 20-year-old intern with a little knowledge of the markets should be able to trade on it and actually make a profit, if it were truly valuable. The answers I get from them vary. A common one is, "We don't know how it's profitable. We don't actually know. We just have access to the data, but we don't know how to apply it to the markets. We don't really know what value it brings."

Other data providers will say things like, "Well, we can make more money by partnering with a Palantir or a Google, or one big giant company out there, a Fidelity or someone, and having that one client provide our income throughout the year. We actually make more money that way than if we kept the data to ourselves." I think that's an interesting question to wrestle with on the data-provider side.

Kirilenko: The economics, Chester, is fairly straightforward: there is valuable data out there, it should be incorporated into prices, and these are the mechanisms to incorporate it into prices. That's good. There's nothing potentially illegal or immoral about it; that's fine. If enterprises find that the signal is too weak, it doesn't make it into prices yet; it might eventually, when processing capacity becomes stronger in some way. A question that is somewhat unanswered is this. Let's run a simple exercise. Say you have three factors that you extracted from the data. You ingested the data, ran the analytics, and decided that these three factors are going to determine how you trade. Based on them, you split, let's say, U.S. equities into long and short positions: if the three factors in combination go positive, for some stocks you go long; if they go negative, you go short. You dynamically rebalance, let's say, every minute.

What you start looking at then is the correlations between these assets. Say you construct a variance-covariance matrix that correlates these assets, and you take a snapshot of it every minute. That's a variance-covariance matrix: positive definite, well behaved. A minute later you take another snapshot and layer it on top, and a minute after that, another layer. What you end up with is something called a tensor; it's a three-dimensional matrix now. Even this three-dimensional object is going to be fairly stable, because these correlations are probably not going to change that much. But even with that stable object, if you actually run a structural decomposition on it, you end up with something that has no known distribution.
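The construction described here, per-minute covariance snapshots layered into a three-dimensional tensor, can be sketched in a few lines. This is a minimal illustration on simulated returns; the asset count, window length, and return distribution are hypothetical stand-ins, not anything from the panel:

```python
import numpy as np

rng = np.random.default_rng(0)

n_assets = 5      # hypothetical size of the long/short book
n_minutes = 10    # one covariance snapshot per minute
window = 30       # minute-bar returns in each estimation window

# Simulate a window of returns per minute (a stand-in for real market data)
# and estimate a variance-covariance matrix from each window.
snapshots = []
for _ in range(n_minutes):
    returns = rng.normal(scale=0.001, size=(window, n_assets))
    snapshots.append(np.cov(returns, rowvar=False))

# Layer the per-minute covariance matrices into a three-dimensional tensor.
cov_tensor = np.stack(snapshots)  # shape: (n_minutes, n_assets, n_assets)
print(cov_tensor.shape)

# Each slice is a symmetric, positive semi-definite covariance matrix.
for cov in cov_tensor:
    assert np.allclose(cov, cov.T)
    assert np.linalg.eigvalsh(cov).min() >= -1e-12
```

Even in this toy setting, each slice carries n_assets * (n_assets + 1) / 2 distinct, correlated entries, and it is the decomposition of the stacked object that, as noted here, yields statistics with no known distribution.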

Now work with it. What are you going to do with it? You're going to extract something and try to go predictive. Even in this very, very simple case, applying machine learning becomes difficult right from the start. Say you've gone through all of this, extracted some weak signal, and from the combination of it you say, "Okay, this is actionable." It's persistent, it's there; you don't know why it's there, but you're seeing it, and it helps you reoptimize your portfolio. You go to execution, but say your execution is not optimized to act upon this very, very weak alpha. You go into execution and actually lose money, because the execution is not optimized.

What's the point? The analytics of machine learning can become useless between the data, your analysis, execution, and the risk-management layer. What you're getting out of machine learning has to be so incredibly strong that you don't lose it to these other frictions.

Spatt: I think maybe we have time for one final question. Some audience members asked: putting a weight between 1 and 10 on the relative promise of finding new big data sources versus applying existing techniques (this ties into some of our recent discussion), which do we think is relatively more important? Having new data sets, or doing better, more sophisticated analyses of the existing data?

Qi: I was just going to give some context on this. In terms of the amount of Big Data that's available in this whole Big Data movement, I remember another startup in New York mentioning that something like 90 percent of the data available today that's relatively useful in the financial space came about in the past two years or so. That gives you a sense of just how much financial data there is and how many different startups are coming out to better utilize these new sources. Unfortunately, I think we haven't quite gotten there yet in terms of taking advantage of all those different sources. There is a lot more promise in looking at the existing sources of data and seeing, not necessarily with machine learning techniques, what kind of alpha signals we can generate by applying technology to that data.

Kirilenko: I think it depends on how much money you want to move through this. If you want to move $1 to $3 million, there's a ton of stuff out there in the markets that nobody wants to touch, because there's just not enough capacity to move money through it. My students find it. If you want to move $1 to $3 million through a market, you can get yourself good returns; there's just no capacity, and at scale it disappears. If you're a BlackRock, if you're large, you have no choice but to look for good data. You're already in everything, you've already allocated, you've optimized your execution; where else do you look? If you're in between, things become interesting. That's where the tradeoff really happens.

Qi: To give some more context to that as well: as a high-frequency trading company, about 99 percent of the data we receive comes directly from the exchange. It doesn't come from satellites or from other companies we're paying on the side; it comes directly from CME [Chicago Mercantile Exchange], for instance. To give you a sense of scale, even today, on a regular morning on the New York Stock Exchange, we might receive something like 350 million signals. That's a lot of data over the course of two hours or so, a lot to process already for us as a small company, and I'd imagine for a large firm as well. There are a lot of opportunities even within that, at a microscopic level, that other firms haven't quite taken advantage of yet.

Narang: I guess, just to address this one directly: seven for new data and three for ML [machine learning] on existing data, because, as I said, with commoditized data sets you can be as fancy as you want; it's not that easy to extract blood from that stone anymore.

Spatt: Let me thank the panelists for very interesting comments. Before we conclude the conference, I'd like to ask Paula to come up to offer some brief closing remarks.

Paula Tkac: These will be very brief. On behalf of the organizing committee, I'd like to thank everyone who participated in the conference for the last two days including all of you, with your robust participation in all the conversations that we've had over dinner and in between sessions. The folks on the organizing committee, I'd like to ask you to stand up. I'm not sure that everyone knows who the folks are. In addition to myself, it's Larry Wall, Mark Jensen, Nikolay Gospodinov, Kris Gerardi, Bin Wei. We're all in the Research Department. Of course, Lisa got several shout-outs last night at dinner. We could not pull any of this off without our great partnership with Public Affairs including Lisa, and Jean, and Cassie, and David, and Jason, and all the folks who do all the AV. One more time.

For all of you, I just want to say, the organizing committee tries every year to come up with a program that you'll find interesting and that we find interesting. We had a lot of fun this year putting this together. I think we've learned a ton over the course of the year, and we learned two tons more while we've heard everyone talk over the last couple of days. We will start planning next year's conference tomorrow, so if you have ideas about interesting topics we can take on, please feel free to contact any of us and provide that input.

I do want everyone else to know that we're going to roll out content over the course of the next month or so, including full-length videos of all the presentations here. The PowerPoint presentations and other materials that the speakers have given permission to share are already up on our website. The app will remain active for as long as you like, in terms of connecting with other people you may have met here, speakers and attendees alike. With that, we are adjourned. Thank you so much for coming, and have safe travels home.