May 19, 2025
Monday Evening Keynote: Michael Schwarz
Microsoft corporate vice president and chief economist Michael Schwarz closed day one of the 2025 Financial Markets Conference with a discussion about artificial intelligence and productivity. Atlanta Fed executive vice president and director of research Paula Tkac moderated a Q&A session following the address.
Transcript
Paula Tkac: On behalf of our whole team, and my boss down here, I welcome you back, after what I hope was a really engaging and restful afternoon, for our keynote this evening. So, it is my pleasure to introduce our keynote speaker, Michael Schwarz, corporate vice president and chief economist at Microsoft.
Michael began his career in academia as a faculty member in the econ department at Harvard University. And then, after what I've heard was a bit of a break to work on a project for Yahoo, Michael, as they say, never went back. So, in the meantime, he has worked for major tech firms such as Yahoo, Waze, Google, and of course now Microsoft, bringing his economist's tool kit to some really thorny problems at the intersection of technology and business.
Tonight, Michael's going to share his insights on AI, bringing an insider's view of how AI can shape the future of financial markets, work—and, potentially, workers themselves—in the macroeconomy. So, with that short introduction, please join me in welcoming Michael Schwarz.
Michael Schwarz: Hello. Well, it's a pleasure and an honor to be talking here at this conference. I was thinking of sending my AI avatar to talk to you about the impact of AI on the economy and on the financial markets. And then I talked to my avatar that I don't have, and the avatar told me that since I don't have any writings about the impact of AI on financial markets, my avatar couldn't say anything—so, I'm here.
I'm not an expert on financial markets, but my team does a lot of work on measuring the impact of AI on productivity. AI hijacked my life several years ago when it burst on the scene. And before that, all my time was spent working on things like pricing, yield management, and things like that—business models—and suddenly, AI comes about and it's like, "Let's measure what it does."
So, we were spending a lot of time trying to measure the impact of AI on productivity, looking at telemetry data, running experiments, running field experiments, lab experiments, whatnot. And I can talk for hours about the impact of AI on productivity; there are lots of studies, great studies, produced by my team, there are lots of great studies produced by academics in different institutions. But we only have 45 minutes, and unless you live under a rock, you probably have seen the bottom line of those studies; and the bottom line is, AI makes a big difference for many, many tasks, and it's getting better and better, so more of that in the future.
But let me just spend maybe a couple of minutes on some of the studies on AI and productivity. So, first of all, you see an enormous range of impacts. Even when you look at the same industry, the same occupation, different studies give you different results. That's how science is done. So, for example, for software engineers, I think AI has probably the biggest impact of any occupation—maybe not, but one of the biggest for sure.
And in the studies that just my team did, the measured impact ranges from the low double digits to an over 50 percent increase in productivity. Different studies give different results, but for software engineers it's always quite large; it's really big. It's always more than 10 percent, in every study that we did.
So, I can go into the details of why the results are different. It's not measurement error; the impact genuinely differs across circumstances. But for other occupations, you see much smaller impacts. So, for example, we looked at the impact of Microsoft Copilot on knowledge workers—and again, there's huge heterogeneity. But if you look at one thing—how much time people spend reading email—you can actually see a meaningful impact. We look at telemetry data, so we see how much time people spend before and after adopting Microsoft Copilot, and it's a 2 percent difference. That doesn't seem like much, but it's actually a lot. If you count the value of that time, that by itself could probably pay for the Copilot, and it's a tiny piece of what Copilot does.
And then, why would you spend less time on email if you have Microsoft Copilot? So, I started digging deeper and deeper into it, and the answer is, well, actually, it's all the thread-summarization feature. You've all experienced those enormous threads, where you were away for six hours and there's a thread of 50 emails, and you go, "Summarize it for me. Is there anything there relevant for me? Delete." And you just read 20 fewer emails. Done, right?
So, there is huge heterogeneity there, and there are two things that I want to emphasize about AI. First, AI is not magic dust that makes everybody more productive; it's just not. Just like running shoes are not going to make you fit. They will not. And that analogy between AI and running shoes is much better than you might think. So, for example, early on, when Copilot first became commercially available, we had something called the early access program (the papers about it are in the public domain, of course). We took those first customers of Copilot and started to observe what happened with them.
And just like with running shoes, you have a company that procured Copilot for a certain number of employees. And then you look at the telemetry data on usage, and you see that a significant percentage of them used it maybe once in the first two weeks. So, obviously, the impact on their productivity is zero; just like the running shoes that you bought, with really good intentions, may or may not do wonders for you. So, it's the same story with AI: we humans are terrible at doing things that are good for us. That's true. And we're terrible at changing our habits. All of us have the ability to extend our lifespan by years, not months, and be healthier, simply by running every day for 30 minutes. And how many people don't do that?
So, even when your life depends on it, we still don't do it. So, when your employer's bottom line depends on you using AI, how easy is it going to be to get you to use it? Are you just automatically going to change your habits? No. And that's what we observe in the data. There's enormous heterogeneity across people—some are here, some are there. Then you look across companies, and again you see this enormous heterogeneity; in some companies, the leadership drives adoption—and you can say, "Different types of businesses, different proclivities."
Yes; maybe true, for sure. But leadership matters, and I think a lot depends on companies' leadership, on how good they are at encouraging people to use AI, creating good training programs. And most importantly, AI is not going to realize its full potential before it's incorporated in the business process. When you say, "Do the same thing; now you have this extra tool"—that's good. It helps, and some people really change their work life because of that; but really, being part of the process is the key.
So, the first observation related to big-picture financial markets that I'd like to make here is that, like every technological revolution that came before, you should expect AI to cause a certain reshuffling in pretty much every industry, just like the internet did. Those who started investing in learning how to use it, in changing their business processes, in trying different things—and, a lot of the time, wasting time, and a lot of the time spending a lot of resources on learning and figuring things out—those companies went up. They increased their market share, and the companies that failed to do that went the way of Blockbuster.
And I think that for AI, this is going to be at least as true as for almost any technology that came before. So, I think that we should definitely expect changes in the relative ranking of companies, and changes in market shares; and a lot of that will depend on how smart those companies are right now, in terms of starting that learning process.
So, that's one observation. So, if I were a young financial analyst, I'd probably go around the different companies in my industry and try to figure out who is really doing a good job of starting to learn—because they are likely to be the big winners.
So, that's one very simple, very basic insight. Okay; well, what else could I tell you about the potential impact of AI on financial markets? Well, another thing that I expect we'll see—and it's actually already happening, I think—is that markets will start reacting a little bit faster to news, because now AI could be listening to the news and interpreting it.
Of course, this is nothing really new. When automated trading became feasible, a bunch of people made really big fortunes by being the first ones to react to things in literally milliseconds. They spent money locating their computers right next to the exchanges so that they could shave off another 20 milliseconds and get in before the other guy. And some of those guys got really wealthy.
But of course, in the aggregate, for the entire economy, it's completely unimportant; it's a tiny fraction of 1 percent. Sure, it's billions for someone's fortune; but it's really not a game changer there. And I expect that with AI, some of that will happen, too. So, some people will make fortunes by basically reacting to news faster than others.
So, to me, as an economist who loves efficiency, this is actually neither here nor there. It doesn't help the real economy. It's pretty much a prisoner's dilemma game, where someone picks up some tiny fraction of a percent through that; not a huge deal. In fact, in the ideal world, that socially unproductive use of technology wouldn't even happen. You can quite easily change the way markets operate, the way Eric Budish from the Booth School of Business advocates—just trade, not in continuous time, but every 10 minutes; and suddenly, there's no incentive to shave 20 milliseconds.
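To make that idea concrete, here is a minimal sketch of that kind of batch clearing, assuming a single asset, simple limit orders, and one uniform clearing price; the Order type and clear_batch function are illustrative stand-ins, not Budish's actual mechanism design:

```python
# Minimal sketch of a frequent batch auction: orders accumulate over a
# fixed interval and all clear at a single uniform price, so arriving a
# few milliseconds earlier within the interval confers no advantage.
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    price: float   # limit price
    qty: int

def clear_batch(orders: list[Order]) -> tuple[float | None, int]:
    """Return the uniform price and volume that maximize traded quantity."""
    best_price, best_volume = None, 0
    for p in sorted({o.price for o in orders}):   # candidate clearing prices
        demand = sum(o.qty for o in orders if o.side == "buy" and o.price >= p)
        supply = sum(o.qty for o in orders if o.side == "sell" and o.price <= p)
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

# All orders collected during one 10-minute window clear together:
batch = [Order("buy", 101.0, 50), Order("buy", 100.5, 30),
         Order("sell", 100.0, 40), Order("sell", 100.8, 60)]
print(clear_batch(batch))   # -> (100.8, 50): one price for the whole batch
```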
Okay; that's also second order. It doesn't really matter if you do it this way or not; it's second-order stuff. So, is there anything first order? I think there are some things that are first order and interesting, and I want to mention a few of those.
What will AI do for the real economy? Well, one thing it will do, of course, is make firms more productive. That's important, that's a first-order thing, and it will make mankind wealthier—and it will make shareholders wealthier, sure. But that's really neither here nor there, in terms of financial markets. It's just technological progress, and financial markets will simply reflect whatever that progress is.
Is there anything interesting, specific to financial markets? And I'd like to point out one thing that feels interesting to me there. So, what do the financial markets do for the real economy? Why do you need them? And I think that the most important thing is probably risk sharing; that's the key, along with the allocation of capital. And if you can share risk better, that's first-order important; right? And, how do we share risk?
Well, financial markets make it possible for companies to be publicly traded, and that's the way for owners not to suffer enormous amounts of risk. If you are an owner of a company and you own 100 percent of it, you're going to be extremely risk-averse, and that's going to be very bad for the economy overall because you wouldn't be willing to take risks that make a lot of sense for society, but that don't make sense for you.
That's why the fact that we have markets where companies are publicly traded is wonderful, and it creates a lot of real value. I think AI has the potential to expand the set of businesses that could be publicly traded and could benefit from that risk sharing, and here's why. So, for a company to be publicly traded, well, it has to be followed by some analysts, and that's expensive. Nobody wants to invest in a company that's a complete black box.
So, I think with AI, it becomes cheaper to follow companies; it becomes cheaper to do audits, and so on and so forth. And that might lower the bar for the minimum size of a company that could plausibly be publicly traded. I could see that having really first-order welfare effects, and it could change the way companies operate, because they may become a little bit more adventurous if owners could own just part of the company.
And there are many businesses, many examples of companies, that currently are rarely publicly traded. For example, imagine a guy who owns 200 apartments; that's a very wealthy person. But it's a person who is very undiversified, and today there's no easy way to diversify that. So, imagine technological solutions that make it pretty easy to monitor those kinds of guys; then, maybe, the threshold for being publicly traded will go down. There's a real, meaningful benefit there.
And by the way, this is not unique to AI. Today, the share of the economy that lives in this publicly traded space is much bigger than a hundred years ago. Why? Well, for one reason, mankind grew up a little bit, and we developed institutions, and yada yada yada. But none of that would be possible without the IT revolution, and most of that wouldn't be possible without the telephone, and telegraph, and all those other things. So, the set of publicly traded companies has been expanding as analytics and communication technology improved, and AI is one more step in that direction, again lowering the bar for being publicly traded. That's a good thing.
Also, I'm fairly confident that AI will increase the share of the economy that's in the publicly traded domain. I'm actually not at all confident that AI will reduce the size of publicly traded companies. And you could say, "Well, Michael, aren't you contradicting yourself? Didn't you tell us how smaller guys would be able to go public?" True; they will be able to. The threshold will be lower, but something else would happen. It would also be easier for large companies to either acquire other smaller companies, or to get into the lines of business that they previously wouldn't dare to go into because they would say, "How are we going to monitor this thing?"
Well, AI makes it a little bit easier to coordinate and communicate all this information within an organization. So, there are those two countervailing factors. On one hand, it becomes easier for the little guy to go public. On the other hand, it becomes easier for the big guy to either purchase that little guy, or to just get into that business itself and fold it into its own operations. So, it could go either way. I see those two countervailing forces. I don't know which way it will go, but I'm quite confident that the share of the economy that's publicly traded will increase—and that's a good thing.
So, I think I will stop here, and I think now is the time for questions.
Tkac: Okay; I'll remind everyone, if you have questions for Michael, put them in the app on Cvent, and we'll take them up here. I'm going to start with a question from the audience, and I may have a follow-up; we'll see where this goes. There's research literature that suggests it can take up to 40 years for a new transformative technology to fully influence the economy; we've all heard that. So, the question is: "Where do you think we are in that life cycle?" In some ways, you can think of that as: "When was AI born? How do we think of AI right now?" But what is that horizon like? Where do you think we are? How long until we see those transformative benefits?
Schwarz: Well, I don't have a crystal ball, so I don't know. I think it's, in some ways, a little bit of an ill-defined question, because one way to answer is: well, actually, yesterday. If you go around this room and talk to each person, the first thing you will see is enormous heterogeneity of AI adoption. You'll see some people who say, "Yes, I played around with it a few times, but it's neither here nor there for me." And you'll see some people who go, "Oh my God, I'm using it for this and for this and for this and for this, and it makes quite a difference for me."
And so, for example, if you talk to, say, graphic designers—some graphic design shops really completely transformed how they work with AI already; that's happened. So, you see meaningful impact, impact that matters. Could the impact be much, much bigger—even if AI were not to improve, just at the current level of AI, could the impact be enormously bigger than it already is? Sure, there's no question about it. It could be enormously larger.
So, how quickly will we see the full impact? Well, "the full impact" is almost a meaningless question. Imagine you had asked me the same question in, say, 1950: How long will it take for computers to fully impact the economy? Circa 1950—75 years ago—how would you have answered that question? If you knew the future for the next 75 years, you would say, "Well, I don't know; it's clearly already happening, and happening in important ways—but there will be more in 10 years, and there will be a lot more in 20, and a lot more in 30, and so on and so forth."
So, I think with AI it's perhaps a similar journey.
Tkac: So, I'll interpret your answer a little bit as saying that it's really in the hands of businesses how fast this moves, because you talked about the heterogeneity. So, what are some things that you've seen from your work—and I know you've done a lot of research into where productivity can be enhanced. What are some of your thoughts about how, let's say, an individual business or an organization can effectively implement or speed up that horizon, at least within their organization?
Schwarz: Well, I think it obviously depends on the organization; but let me just give one example of a study that my team did on Copilot for Service. So, it's not a consumer product; it's a Copilot to be used by customer service agents and other support agents. And there is very robust evidence that it improves productivity; it takes less time to handle a ticket, all the metrics go up—which is not surprising. You have someone who does the first cut of the work for you.
So, in fact, in that study—and that was about a year ago, so by now it's probably significantly better; the world doesn't stand still—we see low double-digit improvements on most metrics. And on some level, you can say that's pretty enormous; right? How often do you get a more than 10 percent increase in productivity?
On the other hand, actually, on some level, it's really small; it's just the beginning. If it's implemented and integrated and done really well, I could imagine the numbers being enormously higher. But that's one example where I think every sane organization should be starting to figure out how to do customer service better and cheaper.
Tkac: So, in some of your research at Microsoft—and I encourage everyone to go to the site and look at it; the papers are really fascinating—you've done, I think, some field experiments but also some testing of AI technology with actual workers, but in a very structured, what I would call "scientific" way. And so, I was hoping you could talk about that method of experimentation, and would that be something that you might suggest to firms as they try to figure out where AI could be productive inside their operation?
Schwarz: That's a great question. The honest answer is, I actually don't know, because I'm not an expert on implementing AI in a particular organization—and in fact, there's no such thing, right? The person who is an expert on implementing AI in a service organization would probably have years and years of background in the service business, and a person who is an expert on implementing AI in a design shop would have to come from a design background, and know how design shops operate. For software engineers, AI does wonders, but again, every case, every industry, is different there, I think.
But generally, we saw some customers doing those kinds of experiments, where we helped them design the experiments: basically, they take a bunch of tasks and run experiments with their employees. And I think there's one unexpected benefit of those experiments that the companies were not counting on, and maybe were not even interested in; they just wanted to see how much difference it makes.
But it's also kind of an interesting training for employees; when you're part of that experiment and you spend a few hours doing tasks using AI, that's actually kind of an interesting part of training. So even if you know exactly the result of the experiment, perhaps running those experiments may pay for itself simply because it might be very effective training. Or maybe not; maybe there are better ways of training people. I don't know.
Tkac: All right; we're going to go to some audience questions here: "Will the risk of AI hallucinations ever go away, or is there an inherent amount of inaccuracy in these models?"
Schwarz: I love this question. Obviously, I don't know the answer, but I think I do. So, I think it will go away, and I think it's going away relatively fast.
So, let me give you an example of a paper that my colleagues, computer scientists from Microsoft Research, recently published. So, they did an interesting experiment. They took a bunch of questions that people ask AI, and they said, "Okay, we're going to use the standard Azure OpenAI service"—so, there's no new AI technology; it's exactly the same service that any of you can use with a swipe of a credit card in Azure.
And what they did is, they said, "Let's see if we can reduce the amount of hallucinations by the way that we humans tend to reduce the amount of our hallucinations: let's have a committee." So, they would ask the same question multiple times. And then they'd say, "Okay, here are the answers. You, AI, the same service—here's a question, here are 10 answers. Read all of them, and find the best five." Okay; done. Here are the best five. They say, "Okay, now take those best five answers, and integrate them into one best answer." Done.
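In code, the committee procedure as paraphrased here might look roughly like the following; ask_model is a placeholder for any chat-completion call (for example, the Azure OpenAI service), and the prompts and counts are illustrative rather than taken from the published paper:

```python
# Sketch of the "committee of answers" trick: sample several answers to
# the same question, have the model shortlist the best ones, then have
# it merge the shortlist into a single answer. `ask_model` is a stub to
# be wired to a real model endpoint; it is not the paper's actual code.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("connect this to your model endpoint")

def committee_answer(question: str, n: int = 10, keep: int = 5) -> str:
    # 1. Ask the same question n times; sampling yields varied drafts.
    drafts = [ask_model(question) for _ in range(n)]
    numbered = "\n\n".join(f"Answer {i + 1}:\n{d}" for i, d in enumerate(drafts))

    # 2. The same model acts as judge and picks the best `keep` drafts.
    shortlist = ask_model(
        f"Question: {question}\n\n{numbered}\n\n"
        f"Select the {keep} best answers and repeat them verbatim."
    )

    # 3. The same model merges the shortlist into one final answer.
    return ask_model(
        f"Question: {question}\n\nCandidate answers:\n{shortlist}\n\n"
        "Integrate these into a single best answer."
    )

# Cost note: n + 2 model calls instead of 1, which is roughly the
# tenfold computational increase mentioned below.
```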
Suddenly, performance significantly improves, because hallucinations are significantly reduced. They're nowhere close to being eliminated, of course, but they're much lower. So, it's kind of fascinating, and it's fascinating to me for multiple reasons. First, it shows that even with a super simple trick, you can significantly reduce the amount of hallucinations. It will cost you a pretty penny, because your computational cost just went up ten times, right? So, are you willing to pay ten times more to reduce the risk of hallucinations?
Actually, let's do a poll. Raise your hand if you're willing to pay ten times more to cut hallucinations by half.
Unknown audience member: What's the base; what's the baseline?
Schwarz: Oh, excellent question. Okay; so, the current price for consumer Copilot is $30. Actually, there's a consumer version for $20; the enterprise is $30. So, let's say, enterprise—$30 a month. So, if we go to $300, we cut hallucinations by half. Who's in?
Unknown audience member: What's the baseline rate of hallucination?
Schwarz: That's an excellent question, and the answer to this question does not exist, because it depends on the kind of questions that you ask, right? You can create a universe of questions that are extremely hallucination-prone, or you can create a list of questions that are not. So, the baseline rate depends on the population of questions that you draw from. But you all have experience with AI, so you know what your personal baseline rate of hallucination is. It depends on the questions you ask.
But I think the bottom line there—I want to come back to this question. So, even today, by spending more money, you can reduce the amount of hallucinations. Obviously, the technology is moving so incredibly fast, and things change so incredibly quickly, that it's not like we'll always have to spend 10 times more in order to cut hallucinations by half.
I think that over time, those things will become cheaper. There are tricks that people invent, and there are optimization layers being built as we speak. Remember, everything has moved so fast that nobody has had time to optimize anything. So, once you start optimizing things, hallucinations will keep going down and down, right?
So, for example, there's a very common hallucination that you can experience today. You ask a question, the AI produces a reference, and you go to the reference. You open the book, you open the article, and it's just not there. It's a pure hallucination. Obviously, with enough infrastructure and effort, you can create another AI agent that specifically checks whether that reference is correct or not. I'm not on the engineering side, so I have no idea what the engineering pipeline looks like for AI, but I would be absolutely shocked if people are not building something of this sort.
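Purely as an illustration of what such a checking agent might look like (Schwarz stresses he is not on the engineering side, and this sketch is not from any real pipeline), a hypothetical reference-checking pass could be as simple as this, with extract_citations and lookup_in_catalog standing in for a real citation parser and search backend:

```python
# Hypothetical reference checker: a second pass that pulls each citation
# out of a generated answer and verifies it against a trusted source
# before the answer ships. Both helpers are naive stand-ins.
import re

def extract_citations(answer: str) -> list[str]:
    # Naive stand-in: treat anything in [square brackets] as a citation.
    return re.findall(r"\[([^\]]+)\]", answer)

def lookup_in_catalog(citation: str, catalog: set[str]) -> bool:
    # Stand-in for a real search against a library catalog or the web.
    return citation in catalog

def unverified_references(answer: str, catalog: set[str]) -> list[str]:
    """Return the citations in `answer` that could not be confirmed."""
    return [c for c in extract_citations(answer)
            if not lookup_in_catalog(c, catalog)]

known = {"Smith 2021", "Budish et al. 2015"}
draft = "Batch auctions reduce latency races [Budish et al. 2015] [Jones 2030]."
print(unverified_references(draft, known))   # -> ['Jones 2030']
```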
And of course it will come, so I would expect hallucinations will go down and down and down. And the analogy that I like is, in the early days of car manufacturing, it was presumed by everybody that it's an intrinsic property of a new car that it breaks down all the time. So, I don't think we have anybody in this room who is 90 years old; but if we did, and if that person was a car owner in his 20s, he (or she) would remember.
And it just seemed to be an absolutely intrinsic property of the car. You buy a car, of course it breaks down, and of course you need to fix it. By the way, with early cars, for every 10 hours of driving there was about one hour of fixing. That was a normal, typical ratio. So, think of that as a "hallucination."
And today, you buy a car and it's a lemon if it breaks down in the first 100,000 miles, right? So, I think it's a pretty safe assumption that AI will travel the same trajectory as the automotive industry. If there's enough focus on reducing hallucinations, eventually people will get around to getting it right. I think it will be a lot lower.
Tkac: So, in 2023 we all were introduced to generative AI, and now we've got a whole bunch of questions here about agentic AI. So, what are your thoughts? We have questions here about: "How do you think about oversight of risk with agentic AI?" "How do you define it?" "How useful is it, currently?" "What kind of use cases are there?"
Schwarz: So, I'm very excited about agentic AI. I think it's going to be very important; there's no question about it. There is a little bit of a question of definitions, of what we actually mean by agentic AI. I think a lot of things out there are called "agents" that we may or may not want to call "agents." So, what's the difference between an agent and a chatbot? An agent can take actions on your behalf. Okay; so, if my site has a chatbot that you can talk to, isn't that chatbot talking to you on my behalf? So, isn't that chatbot an agent?
Well, obviously not; but then, what's the definition of an agent? What's the threshold? I think that, obviously, the more autonomous the thing is, and the more it makes truly independent, important decisions, the more agentic it becomes—and the more careful you need to be. That's why, in these early stages, most agents that we're seeing are kind of on the boundary of not even being agents—well, not true for some, of course.
But I think agents are super useful; obviously, simple things that could be reliably done by AI, should be done by AI. It's a wonderful thing, right? So, maybe many of us have the pleasure of occasionally approving expense reports, approving certain transactions. So, could some of those things be taken care of by AI, where it's analyzing, and it's like, "This thing is super safe"?
And to be honest, if it already has a label saying it's probably safe, the approver would probably approve it without carefully reviewing anything anyway. So, when you put an AI on it, maybe it would actually do a better job, and a safer job, than the human approvers. Not much could go wrong there.
So, obviously, when you go to other use cases, it could be much more dangerous. But I think it's extremely important to be very responsible, and if there's any meaningful risk at all, you need to be double sure, and triple sure—nobody is perfect, right? Humans make mistakes, too. So, I think the threshold for when we want to allow AI to do a certain task is when we are certain that it does a safer and better job than the better-than-average human who is normally assigned that task.
So, I think it will be a huge thing, and I think there will be more and more of that.
Tkac: So, when I think about the risk of agentic AI—or just, generative AI being used in the workplace—what I often hear from folks is, well, the AI output, because of hallucinations and all the things we've talked about, may not be perfect, as you just said. So, how can we think about getting the appropriate amount of reliance on AI output, so that we get the benefits, but we don't maybe necessarily take too many risks?
Schwarz: So, I love this question. I don't think this question is specific to AI. I think it's really a question about automation; and if you think about it, AI is, at least today, far from the most daring example of automation that humankind has ever done. There's a thing called "autopilot." In fact, the reason we call Copilot "Copilot"—maybe it has something to do with the existence of autopilots, right? So, planes are basically flown on autopilot. They're operated by a machine. It has nothing to do with AI, but it's an autopilot that has your life in its hands, for each of you who came here by plane.
And of course, we have human pilots monitoring the autopilot; that's very, very important. One of the issues that is just intrinsic to automation is that you want to make sure there is effective human monitoring—and that's actually really challenging. But fortunately for us, industries have been facing those challenges for decades now, and there are two challenges there, right? One challenge is that if something is automated, and it's a relatively difficult task like flying a plane, there's a risk that humans simply lose the skill. You're out of practice. If you're always driving your car on autopilot, you will become a really, really bad driver, so when it's time to take control, you may not quite know how to do that.
So, of course, we all fly and we feel safe flying. Airlines solved this problem, and they're very thoughtful about how they make sure that pilots stay well trained. You need to have protocols and procedures to make sure that humans retain those skills. And it's going to be a huge thing with AI as well.
And then there's another issue: if AI is extremely, extremely good—and we humans are a careless bunch—when you see that something works flawlessly for years and years, and you're the guy monitoring it, you fall asleep at the wheel. And in fact, I believe that there has been only one fatality from self-driving cars to date, and it was exactly that kind of case. It was, I believe, an Uber self-driving car, and the backup driver who was in the car was sitting there, but she was reading something. That was a failure to create a system where there are incentives for a human to pay attention.
And of course, that's not unique at all to planes and cars. We all go through TSA, and the guys who work there, their job is to catch terrorists. You can work there for 40 years, and you would never find a bomb, and you would never find a terrorist; that's a normal career of a TSA officer. So, how do you get those guys to pay attention?
Well, TSA figured it out. It's not rocket science; they have drills. Once in a while, there's someone from TSA carrying a bomb through security; and if they don't catch that guy, they know that they weren't paying attention. So that's how they keep their people paying attention.
I think that with AI there will be the same kinds of challenges. You need to have humans who monitor things that are high risk; and if failures are very rare—or even if the failures are frequent, but particularly if they're very rare—you really need to focus on having some system in place that would ensure that humans are paying attention, and that they're skilled.
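The TSA drill translates directly to AI oversight: seed the human review queue with known-bad cases and score whether reviewers catch them. A minimal sketch, with every name and rate here made up for illustration:

```python
# Sketch of TSA-style drills for AI monitoring: plant known-bad "canary"
# cases in the human review queue and track whether reviewers flag them.
import random

CANARY_RATE = 0.05   # fraction of the queue that is planted failures

def build_review_queue(real_items: list[dict],
                       canaries: list[dict]) -> list[dict]:
    queue = [dict(item, is_canary=False) for item in real_items]
    n_canaries = max(1, int(len(queue) * CANARY_RATE))
    queue += [dict(random.choice(canaries), is_canary=True)
              for _ in range(n_canaries)]
    random.shuffle(queue)   # reviewers can't tell drills from real work
    return queue

def attention_score(queue: list[dict], flagged_ids: set[int]) -> float:
    """Share of planted failures the reviewer actually caught."""
    planted = [item for item in queue if item["is_canary"]]
    caught = sum(1 for item in planted if item["id"] in flagged_ids)
    return caught / len(planted)

# A reviewer who never flags anything scores 0.0; routine audits of this
# score reveal who has fallen asleep at the wheel.
```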
Tkac: All right. So, a question about some new research. New research shows that AI models tend to repeat human behavior; for example, they engaged in insider trading despite being told that it's illegal. So, their response to incentives is apparently similar to ours. What are your thoughts on that?
Schwarz: Wow, that's a fascinating question. I didn't expect this question. So, first of all, I think that if you were to ask the AI why it did that—I'm not sure you can say AI thinks, but it's not like the AI deliberately did it. It's just kind of an inadvertent thing, where it forgets about the rule rather than deliberately doing something malicious.
So, I'm not sure what to say about it, other than that I think AI is probably a lot less prone to engaging in malicious behavior than we humans are, because when we are told to do or not to do something, we also have our own incentives and agendas. The only reason the AI deviates from what it was told is that it has conflicting objectives and doesn't quite know how to balance them correctly. It was probably told to make as much money as possible, but hey, by the way: don't do this, don't do this, don't do that.
So, when a human being crosses the line, pushes the boundary, sometimes we do it accidentally. It's very common, right? We didn't intend anything by it; we just accidentally crossed the boundary because we were busy. You're running a race; you're not a cheater, but you're running very tight around the curve, and you cut a little onto the grass, and you get disqualified—and you didn't do it on purpose. You were just trying to go as fast as possible, right?
So, I think with AI, it's kind of like a runner who accidentally gets on the grass, rather than a runner who deliberately does doping.
Tkac: Okay, we're going to ask the most popular question this evening: How is Microsoft adapting its staffing model and general demand for labor?
Schwarz: How does Microsoft what?
Tkac: How is Microsoft adjusting its demand for labor, given the AI-led boom in labor productivity?
Schwarz: Well, I think that, for example, in our service organization—obviously, like any other large company, we have—oh, I'm saved by the bell; but let me finish answering the question. So, like any other organization, we have a huge service department where people submit all kinds of tickets about all kinds of things. They started deploying AI, and they had significant cost savings; and some of the people were reassigned to other things, because they're very busy, right?
We are at the center of the AI revolution, so we always need more hands for a lot of things we want to do. So, when hands free up, it's obviously a good thing; there is usually a good use for those hands (but obviously not 100 percent of the time, like in any other company).
Tkac: Okay. Well, thank you. This was a fascinating conversation, and I'm sure there's more after we adjourn. Thank you, Michael.
All right. So, thank you, livestream folks; we will see you again tomorrow morning. All of you here, please join us for breakfast starting at 8:00, and our first session of the day starts at 9 a.m.