John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, talks about the transformative possibilities inherent in artificial intelligence and related technology.
Marie Gooding: My duty tonight is to introduce to you our keynote speaker, and I think you're in for a treat. We're going to add a dose of positive psychology to our topics of economics and emerging technology. I'm excited to introduce John C. Havens, who is our speaker tonight. He's going to help guide us through a human-centric road map into the future. John is the executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
It goes on to page two. This initiative encourages experts creating autonomous and intelligent systems to prioritize ethical, values-driven considerations so that the technologies benefit humanity. We did talk a little bit about that today, so this should be very interesting. The IEEE global initiative recommends standards through its document, entitled Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems.
John also founded the H(app)athon Project. It's a nonprofit that uses psychology and emerging technology to increase humanity's well-being. He also writes about technology and well-being for Mashable.com and the Guardian. He's the author of Heartificial Intelligence: Embracing Our Humanity and Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Change the World. I think you're in for a treat. I told John that his main objective is to make sure the table discussion tonight is lively. So he's going to challenge us, I know. Please join me in welcoming John Havens.
Havens: Am I good? It's going to be lively, but I changed my talk. I'm actually going to describe Immanuel Kant in German for four hours. Is that cool? Because I'm before dinner. All right, this is a good crowd. All right, I'm going to start with a joke. My son's 15, and I love my son's jokes, because he's so dry and...
All right. It's a little bit morbid, but don't take it for morbid. It's only the setup of the joke that's morbid. So a woman, enjoy, a woman loses her husband and she has an open casket. And a guy comes up and he's very respectful, and he says to the woman, this widow who's lost her husband, "Can I say a word?" And she says, "Sure, go ahead." And he says, "Plethora." And she says, "Thanks, that means a lot." Love that joke. And my son tells it and he nails it every time.
Anyway, thank you so much. I'm honored to be here. I also know that I'm in between you and dinner, so my talk is 17 seconds long, enjoy. Ethics are good, okay? I'm going to talk about design, data, and development tonight. And thank you so much, I know my bio is lengthy, so I'll skip all this stuff, except to say IEEE is the world's largest technical professional association. And I know we have, tell me your first name again, I'm sorry. Pat? Pat is a member of IEEE, give it up for Pat. Thank you. Because we're being recorded tonight: IEEE, the world's largest technological association. I'm just kidding, but I should say I'm speaking tonight as John. What I say doesn't necessarily reflect IEEE, as it were, but what I actually run is a large program focused on the ethics of autonomous and intelligent systems. But not ethics like, hey, let's talk about utilitarianism for 17 hours.
It's more, how do we actually think about the big issues that these amazing technologies, like what you're doing here at your conference, are going to bring to humanity? And how do we prioritize not just thinking about the ethics, but also using applied ethics, or what's called values-based design, so that what you create is actually aligned with the values we want built into what we do as a society? I'll tell you more about that in a minute.
This is an eye chart, you'll be tested on it later, sorry. It's very small, enjoy. But IEEE, one thing they're very known for is standards. For instance, the Wi-Fi standard is an IEEE standard. Of course, you use it, awesome. And along with this paper that we put together, called Ethically Aligned Design, which was mentioned, written by 250 global experts, not just from the States, but from China, Japan, Russia, it's very cool, we've been the inspiration to create 13 standards working groups. These are now the largest sweep of standards in this space, on things like algorithmic bias considerations. There are about four on data.
And the logic is, when these standards are created, they're used, because people actually know 50 people got together for two years and talked this stuff out, and because they bring value. So, anyway, all this stuff, by the way, is free. This is my pitch, and it's drawing to a close. If you want to talk to me about it after, I'd love to have any of you join in the work, because, again, you don't have to be an IEEE member.
Okay, so one thing I think is very important in a dinner keynote is to try to entertain. So we have a couple of things. First of all, and where's my buddy who's a big fan of Monty Python? I'm already forgetting names, forgive me. Where are you? Is it Mark?
Havens: Mike, all right, I knew it was Mike. Give it up for Mike, because he's a fan of Monty Python. Anyone else who's deranged like Mike and I? Like Monty Python? Monty Python? All right (singing). All right, you got some whistling over here.
All right, I need some help, and who's the person who's going to help, who's been so helpful already with this entire conference, let's get Lisa up here. Lisa, let me get you up here. Lisa, Lisa, Lisa. Okay, here's how you know how much of a dork I actually am. Is I wrote the lyrics, if you know the "Philosopher's Song" from Monty Python, yeah, okay, there's an Australian, they're all named Bruce.
I wrote a song using the same tune, but I rewrote it with all the AI ethicists' names. So, again, huge dork, and hopefully you'll recognize at least a few of these names. I should point out that I wrote this song two years ago, and the late, great Stephen Hawking is mentioned in it; that's why he's mentioned. I'm not singing about him now that he's dead, I'm singing something from over two years ago. Okay, here we go. Can you please advance the next slide? Let's give it up for Lisa for advancing the slide. I hope you can read this, apparently it's also on your app. Okay (sings parody song).
Thank you very much. I did not clear with anyone the use of the word screwed, I hope that's okay.
Enjoy. Let's give it up one last time for Lisa. We're going to applaud for her all evening. All righty, I also have a harmonica I can play later if you want, upon request. All right, I'm going to talk about three things. Design is first. I'm going to talk about how AI ethics is transforming engineering. And I asked this question in my book, Heartificial Intelligence, and I am a big fan of puns and double entendres, but this is genuine: how will machines know what we value if we don't know ourselves? AI, by the way, is like any technology. It's not good or evil per se, but it's also not inert. And as was mentioned, I founded a nonprofit organization, and this was something we did using positive psychology: we gave people a values survey, to have them track their values. And you might say, why would you track your values? And my question is, have you ever tracked your values?
And one thing certainly economists know better than others, and most people, whenever I've done this talk over the past four or five years, in general, what you probably know, if not down to the penny, is how much money you have in your checking account and your savings account. And that's important. You should. But do you know how much time you spent with your family? Or, if you like being outdoors, how much time have you been outdoors? These are good things to know if these are values that you want to live to. The science of positive psychology, it's an empirical science, it's been around for about 20 years, and it says, and it's not really rocket science, "If you don't live to your values, your well-being decreases."
So these 12 words. I know they're a little hard to see from where you are, but there are words like family, nature, spirituality. These are 12 terms created by a famous psychologist named [Shalom] Schwartz. And these are common values around the planet, right? Everyone has a family around the planet. How you think about them is different. But what we did is, we started off this survey, we got a bunch of people to take it. And we said, first of all, do a values level set for your life, using these 12 words. There's no right answer. But on a scale of 1 to 10, nature. Do you dig nature? I like using the word dig, shows you that I'm 49, enjoy. Forty-nine years old, yes, but nature. Say you go outside all the time, and you really find pleasure being outside, well then nature would be a 10.
Say spirituality, you're agnostic, you're atheist, and so spirituality might be a 1. But first of all, one thing people told us when they took this survey is they said, "No one ever told me that I could actually track my values. And secondly, it wasn't until I did these 12 things in unison that I got a sense of which things I was prioritizing more than others." And we said, "Great," because that's the point of the survey. And what we did is, at the end of every day, for about three weeks, we emailed people and said, now with these same 12 words, on a scale of 1 to 10, did you live to those values today? So if you were outside all day, that would be a 10. If you were indoors all day, it was zero. Right, it wasn't overt...you weren't working too hard on the empirical nature of it. It was more just a sense of reflecting on your day, which is also a positive psychology kind of gratitude, kind of meditation thing, but you just don't tell people that because it might freak them out.
But then what we did is at the end of the three weeks, we simply just tallied up your 21 days of data, and we said the word nature. When you started this survey, you said nature was a 9. That it's very important to you, that was your reflection about your life and your values. But over the course of three weeks, if it's a normal three weeks, and you're not traveling or whatever, you actually said that nature was only a 3. That was the average. And the question then becomes, it's not about judging, it's not about condemnation, it's an opportunity to say what are the values that you actually hold so dear that you live to, and if not, is that a reason then that you weren't increasing your well-being?
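The tally described above is simple enough to sketch. This is a hypothetical illustration, not the H(app)athon Project's actual software: a one-time baseline rating for each value, daily ratings for 21 days, and then a comparison of the lived average against the baseline.

```python
# Hypothetical sketch of the values-survey tally described above:
# a baseline rating per value from the initial "values level set,"
# then daily 0-10 ratings for about three weeks, averaged and
# compared against the baseline. Value names are illustrative.

from statistics import mean

def values_gap(baseline, daily_ratings):
    """For each value, compare the multi-day average against the baseline.

    baseline: {value: 1-10 rating from the initial level set}
    daily_ratings: list of {value: 0-10 rating}, one dict per day
    Returns {value: (baseline, lived_average, gap)}.
    """
    report = {}
    for value in baseline:
        lived = mean(day[value] for day in daily_ratings)
        report[value] = (baseline[value], round(lived, 1),
                         round(baseline[value] - lived, 1))
    return report

# Example from the talk: nature rated 9 up front, but lived closer
# to 3 over the course of three ordinary weeks.
baseline = {"nature": 9, "family": 8, "spirituality": 1}
days = [{"nature": 3, "family": 7, "spirituality": 1}] * 21
print(values_gap(baseline, days))
# nature -> (9, 3.0, 6.0): a large gap between stated and lived values
```

The point, as Havens says, is not judgment; the gap column is just an opportunity to notice which stated values you are not living to.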
A great result from it, the same idea, the theme of value, is something I really want to bring up...and I meant to ask this, do me a favor, just because I know it's before dinner, so you're tired: name some values. Just yell out a word of a value for me.
Audience member: Honesty.
Havens: Honesty, great. Anybody else? I'm going to focus on this area the second half of my talk.
Audience member: Trust.
Havens: Trust, good one. Anybody else?
Audience member: Excellence.
Havens: Excellence, that's a good one. Grit, generosity.
Audience member: Love.
Havens: Love, oh, very nice. You should sit at this table. Yeah? Empathy. Okay, so what I've found when I've done this talk, if we had time, and it was a workshop, I'd drill down, and I might ask a number of people to stand up and say all of the values they live to. If you're like me, when I first started this work, I get to like four or five values, I'm like faith, my family, integrity, honesty, "Game of Thrones," cheese. I'm like, "Is 'Game of Thrones' a value? I'm not sure if that's a value." And, again, one thing I hope I can leave you with tonight is, if as individuals we don't even know what our values are, and you're the economists, what you track, what you measure, what you count, and what you don't, you don't. Are we not worth asking what our values are, and trying to live to them?
By the way, I forgot, tell me your name again. I borrowed your guitar. Chris. Give it up for Chris, because he let me use his guitar. And, Chris, like I said, I'm really sorry about that horrible sweating problem I had. I'll buy you a new guitar.
So in terms of engineers, I'll go through this quickly. Values, you may say, how can you track values? Something as complex and nuanced as values. Well, there's a methodology called value sensitive design that a wonderful woman named Batya Friedman created 20 years ago. And what this methodology does is it goes beyond...engineers, by the way, have had codes of ethics since the time they built the aqueducts. Amongst engineers there's sort of a joke, thankfully it's not actually a ha-ha joke, it's a thing they say, which is you don't build a bridge to fall down. Engineers, you don't have to tell them about ethics in terms of safety and risk.
However, when it comes to AI, which really addresses aspects of our emotion and our agency, what we can do now with these methodologies, and when I used to work in PR this is the type of work we did, is ask: who is the end user, and what are their values about this product or system that I'm going to build?
So say you're building a phone, and you talk about privacy. This is not you, the engineer or the programmer, saying, this is how I feel about privacy; it's the mom who's going to use my phone. How does she feel about privacy? And then what you do, once you identify what the values are, you can actually start to create, and engineers love to do this, long flow charts and lists, value dams. Say someone's really concerned about RFID [radio-frequency identification], or certain types of surveillance. Well, if that person is concerned, then you have to build a phone that has a privacy setting that would allow them to choose the option that reflects their values.
Now what's exciting, from a business standpoint, is if I build a phone with four of these switches, and my competitor only has two, people will buy my phone. What's exciting from the standpoint of honoring people's values is that to even ask the question, what are the values that they hold dear, you actually have to listen. And especially culturally, what's really interesting is how many people build stuff with unintended consequences where it's not like they're evil. They're not like, "I'm going to make tons of money and kill people." That's easier; you're just like, "You're evil. Utilitarianism, you're good, you're just evil."
But a friend of mine named Edson, who is a roboticist from Brazil, about a year ago he pointed out, look, if you build a robot that's got something like eyes, the morphology looks human, in America, in general, the robot eyes would look in a person's eyes. But we work with a ton of people from Japan and China, and in general, not always, but a lot of times eyes are downcast as a sign of deference, of respect. So you're like, "Hey, I'm an American. Build me a robot," and the eyes are like this in someone's house, they're like, "Oh, it's freaking me out."
These are culturally values-oriented questions. But they also affect market decisions. So the questions I'm asking about, do you know your own values, also the big question is, do you know other people's values? Okay, part one, and only two more parts. This part is about two and a half hours. Sit back, enjoy.
I've written a lot about data. And data, we've been trained, especially in the States, we hear this all the time: what do you have to hide? I hate this question. I hate it. Because you remember this, did you see this two-act play that happened in Washington, DC, a few weeks ago? Now I'm not going to pick on [Mark] Zuckerberg or Facebook per se, but someone asked him this question: what hotel did you stay in last night? He didn't answer. "I'm not going to tell you what hotel I stayed in." I don't know why he sounds like Bullwinkle there. "I'm not going to tell you..." But he didn't answer, and it was a great question to ask him, because what do you answer? He's a famous guy that's going to get attacked. He's not going to tell you, I'm at the Sheraton. But the question, what do you have to hide, is the wrong question. And I want to be crystal clear about the question we all should be asking moving forward in the algorithmic era. The question is, what is ours to reveal?
And here's the thing, if you want some AI ethics examples, there was a big thing in the AI ethics world...by the way, Matt Scherer's here, who's my friend, he's on one of our committees, so just giving Matt some love. Amazing lawyer, who's talking tomorrow. You can clap for him. And did I mention Lisa Fogarty? I did, okay. About two months ago there was a thing, and I'm only using this term because it's the term that was used, so forgive me, it's pretty offensive, called gaydar. Did you hear about this?
There was a technology, and actually what's kind of interesting is that the two guys who released the study released it to show the dangers of surveillance. They found that a technology using facial recognition could supposedly recognize the physical characteristics of gay men and women based on the shape of their face. And again, you've got to read the whole history, because what was really interesting is they were actually trying to say, this thing we built is ludicrous, but be aware of the technologies that people might put out there. Technologies that can say, "Look, this person's gay."
Let me ask you this: who here are parents? Raise your hand if you're a parent. All right. Awesome. How would you feel if, instead of your child coming to you and saying, "Mom, Dad, I'm gay," which is a pretty huge moment in a person's life, if that's something that is part of their life, it was, I went to school today, and someone tweeted the fact that I was gay because this algorithm identified that I was gay?
Privacy is not just about, "Oh, I shouldn't say this," or whatever else. It's about who we are. It's our identity, and who should get to make that decision, about saying something like that? Who gets to say, this is my faith? I'm Jewish, I'm agnostic. You do. Those are your values. It's not that you're hiding anything; it's that you're revealing.
So I also bring up that we have got to move beyond the idea of consent with regards to data. And you can read this; usually people start laughing halfway through my talking about this. In the UK a couple of years ago, they put out a free Wi-Fi stand in central London, and they said, "Free Wi-Fi," but you've got to sign the terms and conditions. So people went to it like bees to honey, whatever the analogy is, a bear to honey. And over the course of about six hours, apparently of the 600 people that came up to look at the Wi-Fi, "Yeah, I want to check my email," 60 people signed this, which, if you can't read it, says, "In using this service, you agree to relinquish your first-born child to F-Secure, as and when the company requires it," etc., etc.
So a lot of people when I give this talk, they're like, "Yeah, but if they were stupid enough to sign the terms and conditions, it's their own fault." Really? Really? Do you want your child to be the one that's stupid enough to sign something? And more importantly, is this actually consent that builds trust, between banks and people, between brands and people? Or is this something new? Well, good news, it's something new.
So if you're not familiar with the world of the personal data ecosystem, and also the identity ecosystem, I'll give you some things to check out. This is one organization called Sovrin. Obviously it's a takeoff of the word sovereign, but it's based on blockchain technologies, which is very exciting to note. And don't go to the place of cryptocurrency, bitcoin, which you have probably already spoken about at this conference; that's a whole other can of worms. What it's built on is the idea of a distributed ledger, and the logic is, when you share data with someone, here's the shocking idea: you don't have to share every little bit of information about you since the dawn of time.
If you want to, say, verify that I'm John Havens, and I want to use your credit union...I found this researching this talk, in American Banker, I don't know if you guys like it, but hopefully you do. Anyway, credit unions are looking to blockchain to solve a lot of digital identity issues, which is really exciting. But here's what the technology lets us move beyond. Think about consent, right? When you sign the terms and conditions, there's a legal term, Matt or another lawyer who's in the room might know it, called a contract of adhesion. And what it basically means is, my way or the highway. And is that how we want to build business?
Facebook, it's a great service, but the logic is, you sign these terms and conditions, or you don't get to use Facebook. Well, in a lot of India, the only way people can actually even get online is through Facebook. A lot of Africa uses digital currency through Facebook. There's Facebook now, and, again, I'm not trying to pick on Facebook, but these types of services have the opportunity to adopt this type of blockchain identity structure, and let people share and reveal the aspects of their data they want to share. By the way, just so you know, this is not like the dark web; this is not you hiding from whoever. We're going from left to right. Left is the aggregated model where we are now. See that person off to the right, the little black silhouette? That's us. We're not at the center of our data, because guess what? If I want to find anything out...by the way, I was an EVP of a top 10 PR firm, so I speak from experience.
If I want to find out information about your behavior, I can. This was four years ago: there's a company called Palantir that everyone goes to. It's an amazing company, but I remember reading a paper four years ago that said Palantir can predict your behavior, like your consumer behavior, with 95 percent accuracy for the next four months. We're creatures of ritual. "I like Starbucks on Monday. I like this..." right. And that's fine, meaning your rituals.
But the opportunity, and this is not antibusiness, this is not anti-AI, this is pro all of these things: putting a person at the center of their data means you can still get tracked. You can go on Amazon and say, "I don't like horror novels," cool. But you also get to say, at the middle of your data, "I want my data shared this way with medical practitioners," and the only people that get this type of most intimate medical data are my doctor or my family. If an employer wants that medical data, they're more than allowed to ask for it. But if I don't really know what they're asking for and I give it to them, now maybe I won't get hired, because I gave them material information that I didn't realize I was giving them.
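The person-at-the-center idea Havens is describing can be pictured as a simple per-recipient policy. This is a hypothetical sketch (not Sovrin's actual API, and the roles and data categories are made up for illustration): the individual, not the aggregator, holds a rule for which categories of their data each kind of recipient may see.

```python
# Hypothetical sketch of person-centric data sharing: the individual
# holds a policy mapping each data category to the roles allowed to
# see it. Roles and categories here are illustrative, not a standard.

policy = {
    "medical": {"doctor", "family"},          # most intimate data, tightest circle
    "purchases": {"retailer", "doctor", "family"},
    "location": {"family"},
}

def reveal(data, policy, recipient_role):
    """Return only the data categories this recipient's role may see."""
    return {category: value
            for category, value in data.items()
            if recipient_role in policy.get(category, set())}

my_data = {"medical": "allergy: penicillin",
           "purchases": ["boots"],
           "location": "Atlanta"}

# An employer asking for everything gets nothing not explicitly granted;
# a doctor sees medical and purchase data but not location.
print(reveal(my_data, policy, "employer"))
print(reveal(my_data, policy, "doctor"))
```

The design choice is the point: consent becomes a standing, revocable grant per category and recipient, rather than one all-or-nothing contract of adhesion signed up front.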
This is about clarity in the algorithmic era. This is not about hiding. This is about saying and understanding, and again one of the things I hope you take from me is, your data, our data, is not a commodity. Your data, every little shard of information that you give off, is a reflection of who you are. You might say, "Well, I'm just giving information about my shoes. I don't think that...this guy seems kind of intense."
But I mentioned the nature thing before, right? Timberland used to be a client. If I knew what type of shoe you like and it's a boot, my guess is you probably like being outdoors. I know something about you. By the time I take two or three pieces of data, certainly if it's your PII, the information most precious about you, whatever, your address and all that stuff, by putting those types of things together, I get to know who you are. And that's okay, right. Again, I'm not antibusiness. I used to work in PR, wonderful clients, P&G, HP.
But think about the amount of things that other people know about you that you don't have access to, and the amount of things that you could use in your own life to improve your relationships with credit unions, banks, economists, whoever. This is where we can go, instead of just being tracked. We can also bring parity, or symmetry, to that relationship.
Okay, part three, because, again, I know I'm keeping you from dinner, enjoy: development. And I want to say I was also asked to do a Q&A, and if that's something you still want to do, seriously, thank you for having me, I'm honored to be here. Over the last four or five years I've written a lot about economics, but not from the standpoint of the level you're at. More as a person who's fascinated with this idea of values, and with tracking, at the individual level, what is important to me, and what that means at the macro level. So these things are designed to be both provocations for you to talk around your dinner tables, and then questions to me. And I hope it's helpful, as a person who's been reflecting on economics for the last two or three years, in circles where most of the events are for AI engineers and ethicists. So hopefully, that's a new perspective for you.
Also, I've had the blessing of being able to be in rooms like the EU [European Union] Commission and the OECD [Organization for Economic Cooperation and Development], and I've been in China and in Tokyo. And you get to really hear what the American view is versus the European view. Again, I hope this is helpful. So first of all, information you probably know, some stats. PricewaterhouseCoopers said that global GDP, and this report came out at the end of last year, could be up to 14 percent higher in 2030 as a result of AI. That's the equivalent of $15.7 trillion. Yay, cool, $15.7 trillion. Here's the bad news. Stiglitz...I know you can't read this at all. I can't either, wow, I'm getting old, 49 years old.
Anton Korinek and Joseph Stiglitz came out with a paper at the end of last year, too, and the bad news about the $15.7 trillion is the economic inequality aspect of it. The distribution of that $15.7 trillion is not universal, not holistic, for all humans on the planet. And this is a quote from the report: "Economic inequality is one of the main challenges posed by the proliferation of AI and other forms of worker-replacing technological progress." They said that, "We may be heading toward a period similar to the Great Depression, when agricultural innovations meant fewer workers were needed to produce food." And then finally, "A lack of the relevant skills will mean a vast majority of the workforce are unprepared to fill new jobs." So these are things you've probably heard.
This is something I found over the weekend. I hadn't thought about it before. In every conversation you've probably had in the past couple of years, people say, "Well, it's cheaper to have a robot or an algorithm in a position, because you don't have to pay them health insurance." And all the headlines say, X technology has now replaced humans, X technology does Y better than humans.
And I've sort of been thinking, well, when it does it better than humans, it's going to be like Deep Blue and chess, or something really impressive. And this guy pointed out, and not being an accountant I hope the numbers are right, that a machine or algorithm doesn't actually have to be 100 percent as good as a human, just 34 percent as good. That's because of things like amortization and taxation: when you're buying something, it becomes part of your ledger, which is to say you own it.
So I'm bringing all this up to talk about values. We're at a really critical time in actual human history; that's not really hyperbole. You hear these things, the big Terminator headlines: killer robots will kill us. It's not killer robots. It's economics. This is why you are the heroes that I look to, in the sense of how we can change what we look at, and what we value. And so, I want to give a couple of provocative things. I hope this is helpful. I'm smiling because I intended these to be funny, so hopefully they will be.
But I want to give you some phrases that you will not hear, and take it from me, I read everything, every Google alert, AI, AI ethics; this has been my work for the past five years. Things you won't hear said about AI. First: AI automation means all of the tedious jobs will be done by machines, so people can pursue more meaningful jobs. You hear that a lot. What you don't hear right after is this statement: so here's the website to go to, to have your bills paid and your reskilling paid for once your job is replaced. Can I get an amen? All right, I'll hang out with you guys after, and avoid you guys after.
But what's also interesting is when I go to Europe, and I've had these same conversations and universal basic income [UBI] comes up. We were talking about this before, me and Chris, and when you talk about UBI in the States, depending on where you are, they're like, "Oh, so you're a communist." I'm like, "I don't think I'm a communist." I wasn't aware. But in Europe, you're like, well, what if people could have health care, and universities were paid for? And they're like, "You mean Denmark? That's where we are."
But I was an actor for 15 years, and a majority of my friends don't have health insurance. Maybe that's a world...if you've had health insurance and jobs, awesome. I do not want you to go through the stress of what now most of the middle class goes through, where you get like a $5,000 dental bill...and I'm not talking about your loved one being in the hospital for a long time. That destroys a family. I'm a little passionate because I'm like, "Is that cool?" Yes? No? Anybody? Can I get a no? Anybody?
Havens: This side's totally winning. I'll give you one more chance. Is that okay?
Havens: You've got to be passionate about it, okay. It's okay. Maybe I'm going to hang out with you. Other phrases you won't hear: we think there are a number of business sectors and job types that AI shouldn't automate, so, okay, please don't. All right. Can all the companies working in these sectors please stop working on AI and automation, even though you're leaving a ton of money on the table, because people are freaking out? Thanks very much. No. Lawyers, right?
I interviewed a lawyer four years ago, for Mashable, it's like TechCrunch, about automation. He's like, "John, what do I do? I have 10 young people working for me, they're at the first stage of being associates, lawyers, and I'm paying each about $50,000 a year," and this was four years ago. "I can buy this AI machine learning software for about a quarter of a million dollars." Now it's probably about $17, but four years ago it was a quarter million. "I fire all these people. I feel bad. I pay $250,000. I increase productivity by 70 percent, and I minimize error by 85 percent," because these poor people are looking over books. He's like, "What do I do?" That has stuck with me. Because I'm like, "Dude, I don't know." I don't know.
Here's the other phrase you'll never hear: even though there's $15.7 trillion to be made by 2030, we're not sure how the question of AI automation will work out, so could all the companies working in this space just take a couple of years and slow down, so policy can catch up and we can make sure that everyone's taken care of, and we can prioritize well-being? Cool, cool, cool. No.
This is the phrase that rules them all. This phrase, I cannot tell you how many times: we've been talking for an hour and a half, someone from Russia says something, Europe says something. Then a hand goes up, "Well, don't hinder innovation." Done. Yeah, you're right, don't...and the evening's done. Those three words, don't hinder innovation, is another phrase I hate. Why? Because why is innovation only defined in an exponential-growth market mind-set? And this is where you're like, "He is a communist. Dear God, we invited this..."
I'm not. I'm asking the question. This is what people really mean when they say, "Don't hinder innovation." They mean don't mess with my money, which is totally cool, just say, "Don't mess with my money." I get it. I'm a dad. It's not wrong to say I'm worried about my business. But I asked a guy who's the chair of our group. I'm the executive director; he's this well-known, famous roboticist. I said, "Innovation, when is it actually not good to hinder innovation?" He said, "Well, look, academics, when they're back in the lab, kind of working on stuff, blue-skying, or entrepreneurs, same thing." At that point, if you walk in the room and you're like, "Hey, what are you guys doing?" that's hindering innovation, right? Let them explore, let them discover.
However, when there's a technology that's ready to go on the roads, for instance, like the Tesla situation, where the firmware upgrade happened from A to B, and one night cars that people had bought, because it said automated, and they're geeks and dorks like me, they're like, "I can get in the back seat and the car drives." And there's a video of a guy in Europe. It's a 15-minute video, it's one of the most terrifying things you can watch, where the company, I'm not trying to pick on Tesla, but the company said, "Hey, by the way, don't sit in the back seat. Make sure to sit in the front with your hands on the wheel." Really? Really? Really?
I built this car for you. It's autonomous. You paid $30,000, $40,000, it's gorgeous. Have you tried a Tesla? Riding in a Tesla is like riding in a...can't think of anything good without getting into a dirty joke. It's just glorious. It's a glorious machine, but you buy it because you're like, "Dude, this is semiautonomous."
But that car is on the road. You watch that video. I watch that video as a dad, and maybe you're thinking, "Oh, he's in ethics," and "Oh, you're annoying." Fine, I'm annoying. The person that got in the vehicle and said, "I'm going to risk my life sitting in the back seat," it's not my choice, but go for it. The families surrounding that car as it rides down the street in San Francisco, they didn't get asked. We're essentially being tested on, and that's where this phrase drives me crazy. Don't hinder innovation, move fast, break things. What we're breaking is ourselves, not just safety. We're setting a precedent for every other technology, giving permission to let us be tested upon. Winding down.
This is something I said at a conference in Tokyo, to a good friend of mine from IBM; I made this point. It is not fair for society to say to corporations, on the one hand, every quarter, when you go in and the door's closed, maximize shareholder value. And, by the way, nothing wrong with profit, nothing wrong with making money, I mean exponential, IPO type of stuff. Maximize returns, and by the way, please protect people's jobs. We feel bad. People are going to get fired, awkward. No. A or B. This is me, right? This is where I'd love your thoughts. This is, I think, a solid truth.
Unless we move, and by the way, I'm not anti-GDP, I mean GDP plus, I'll say more about that in a second, unless we move beyond this exponential growth, maximizing shareholder value at that same exponential level, not just growth to deal with inflation, there's no valid business reason to not automate every human job as quickly as possible. And I'm not saying that to be flip. I phrased that very carefully, because what you will not read is people saying, "Hey, the legal industry, people are freaking out. Please stop automating." Done. The medical technician industry, done. Manufacturing, robots, done. Call centers, when's the last time you spoke to a human when you called your airline?
Am I saying these things are bad? No. What I'm saying is that in aggregate, the message is: automate as much as you can. And not because they're not offering good services, they are, but largely because of GDP. So here's my question. And you again are the experts, and I don't mean to task you with this before dinner, but can we innovate innovation?
I'm a huge fan of Stiglitz. The Stiglitz commission report from 2009 actually says it's easier to measure well-being than productivity. That's his quote. If you're upset, please take it up with Joe. I don't know him, but you probably do. I don't know if I can call him Joe. I got to meet him in South Korea at an OECD thing, and he's like Eric Clapton to me. I play blues, so he's like Eric Clapton to me. But he pointed this out. This was two years ago at a World Economic Forum event. GDP's not a good measure of economic performance. It's not a good measure of well-being.
If you're a business geek like me, you know Michael Porter, this generation's Peter F. Drucker. In a seminal Harvard Business Review article in 2011, he coined the term shared value. Shared value means that sustainable businesses create economic value in ways that also create value for society. He uses a famous case study where there were coffee growers in Costa Rica whose living conditions were squalid, and the company said, "You know what? Here's a crazy idea. We think our workers are probably going to do better if we give them enough to eat, and they're not living in squalid conditions." They went in, helped them out, put their kids into school. And they weren't doing it out of the goodness of their hearts and tree hugging. They were like, "This is going to help our business." And here's the news flash. It did.
But shared value in terms of sustainability is also about how things stretch on beyond where we are. So, triple bottom line: planet, profit, and people. And I'm going to say some depressing things, but then to end I'm going to say some positive things. There are things I also don't understand, that to me are absolutes. In terms of the planet, I'm not interested in being political, not my thing. But Cape Town, running out of water. That seems like a big deal to me. That seems kind of systemic. Is every country going to run out of water? No. I take half-hour-long showers and drink more bottled water than most people put together. I don't want to be hypocritical, but I'm saying when I read that, I was like, "This is kind of an end game when a country doesn't have water."
And then in terms of well-being, what's awesome is where we live today, and you read this a lot when you talk about AI and automation, is the amount of disease that's taken care of. Medicine is helped. We're in a wonderful place with longevity. But one stat that keeps me up at night is depression. The CDC [Centers for Disease Control and Prevention] and the World Health Organization say that by 2030, remember that stat from before, about the trillions of dollars we'll make? Depression by 2030 will be the leading cause of illness globally. Right now, today, and again I'm a dad, the third leading cause of death for teens on this planet is suicide. And why that's a massive deal, not to be obvious, but that means someone is orchestrating the taking of their own life. It's not like they have a disease.
And so the question, and going back to values, and now trying to be positive, because I want to give you an up thing to end on. Is this Doughnut Economics, have you read this book? Anybody, Doughnut Economics? No? I've read a book that you...okay, Kate Raworth, wonderful, really...oh, thank you, what's your name?
Audience member: Mark.
Havens: Mark, give it up for Mark.
Mark: I think it's an incomplete book.
Havens: All right, Mark thinks it's an incomplete book. I'll let her know that part two has to come out. It's a sequel. It's like a Marvel thing. Anyway, so on a positive note, what we're doing with the work at IEEE is to ask the question, I want to bring it back to the positive, how does the future look in 5 years, in 10 years? Not that GDP is eradicated, not that there's some kind of crazy communist takeover, but asking the question, what does it mean when you prioritize human and environmental well-being along with money? No one's going to be like, "I'm done with money." No, money's going to stay.
But the OECD Better Life Index, anybody know about that? No? Okay, if you don't know it, go to the OECD, type in OECD Better Life Index. Now the OECD is a very specific group, 35 countries, largely in Europe, granted, but you can actually do this thing, like that values survey I showed you, this beautiful graphic. You pick a country, and you slide different measures up and down, and you can see how the well-being measures you weight, subjective well-being and objective well-being, change the picture.
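[Editor's note: the mechanism described above, re-weighting well-being topics to re-rank countries, can be sketched in a few lines. This is a hypothetical illustration, not the OECD's actual methodology; the country names, topics, and scores below are invented.]

```python
# Hypothetical sketch of a Better Life Index-style tool: each country has
# scores (0-10) on several well-being topics, and the user assigns weights
# to reflect what they value. All data below is invented for illustration.

def weighted_index(scores, weights):
    """Return the weight-averaged well-being score for one country."""
    total_weight = sum(weights.values())
    return sum(scores[topic] * w for topic, w in weights.items()) / total_weight

countries = {
    "A": {"income": 8.0, "work_life_balance": 5.0, "life_satisfaction": 6.5},
    "B": {"income": 6.0, "work_life_balance": 8.5, "life_satisfaction": 7.5},
}

# Sliding the weights reorders the countries: a user who prioritizes income
# gets a different ranking from one who prioritizes work-life balance.
income_first = {"income": 5, "work_life_balance": 1, "life_satisfaction": 1}
balance_first = {"income": 1, "work_life_balance": 5, "life_satisfaction": 1}

def rank(weights):
    return sorted(countries, key=lambda c: weighted_index(countries[c], weights),
                  reverse=True)

print(rank(income_first))   # ['A', 'B'] -- country A leads when income dominates
print(rank(balance_first))  # ['B', 'A'] -- country B leads when balance dominates
```

The point of the sketch is the one Havens makes: the ranking is not objective, it falls out of the values you feed in.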
Bhutan's Gross National Happiness, anybody heard of that? I know what you're thinking, "gross national ha...that sounds..." Bhutan was inspired by a speech given by Robert Kennedy a few months before he died, a famous speech called the "beyond GDP" speech. And what he said, it's a beautiful speech, you can listen to it online, is that GDP measures all the things that are not as important as what we should be focusing on in terms of the best lives we can lead. It will measure cigarette advertising, but it doesn't measure the amount of time that we spend with our family. It will measure how many oil tankers go from point A to point B, but if that oil tanker crashes into the side of a country, GDP goes up; it doesn't measure the environmental degradation. And he wasn't trying to be anti-GDP per se, he was saying, what are we looking at?
And you've all studied Bretton Woods, right? Or is that just me, because again I am a dork. My wife's like, "Do you want to watch 'American Idol'?" I'm like, "No, I'm reading this really fascinating thing about Bretton Woods." Bretton Woods, you know what it is, right? Okay, I don't mean to sound like a jerk, I know it's time for dinner, blood sugar's going down, so I'm just checking. But Bretton Woods, one thing, and it's not just me, there's a famous economist, Marilyn Waring, with the book If Women Counted. What was not factored into GDP, at least in 1944? Caregiving.
And I know we go into '45, good, that's fine. Let's have that conversation. But as a newbie to economics, I get why it happened in 1944: women stayed home and cooked. Okay, that was the zeitgeist of the era, but it's 2018. It's 2018, welcome. Women, raise your hand if you're a woman in the audience. Please go ahead. I can point to you. You know who you are.
I've got a wife and a daughter, and again I don't mean to sound flippant. That's not my point. But another woman, named Riane Eisler, she escaped the Nazis in the Second World War. She has this whole book, it's praised by Desmond Tutu. She's created something called caring economics, and she points out what it would mean if caregiving counted, not just by women, but by men who take care of kids too. And I took care of my son for like two years.
By the way, if you've ever been a stay-at-home dad in America, you walk up to the coffee shop with your kid, and everyone looks at you like, "Did you have work off today?" And I'm like, "No, I'm taking care of my son." And I made people uncomfortable. "Oh, you...you're probably out of work." I'm like, "No, this is my work. I'm a dad."
Women's work should be added to GDP, caregiving should be added to GDP. Anybody? Okay, this was an event we did in the EU. This woman, Mady Delvaux, she's fantastic. She's a parliamentarian, and what we're exploring is this idea of what the world would look like if policy changed, so that a CEO, when she went in to talk to her shareholders at the end of every quarter, said, "I made my fiscal numbers. And I also made my environmental numbers. And I made my societal numbers." And if she doesn't hit those numbers, she's let go.
I sound like a lunatic, do I sound crazy? Let me ask you this: how will machines know what we value, if we don't know ourselves? I don't know what to do with some of the stuff I told you about tonight. I don't know what to do with depression if we aren't actively spending as much energy on it. It's great to make money, especially doing the things you love. But what about depression, when our kids are taking their own lives?
Can we turn things around and say, "We've got to make money, great. But can we prioritize the fact that we have saved these kids from hurting themselves? Can we prioritize the fact that if these people don't have water, we can actually use all this beautiful AI technology"? And I was telling Matt, my friend here, before: dating services. Has anyone ever used a dating app? I haven't, because I got married 200 years ago, but you enter in 500 pieces of data for most dating apps.
What if we were able to do a dating service for purpose? And this is what positive psychology talks about with the term flow. Flow is doing what you love, what you'd do for free. And if I was able to connect with you, and you knew that you were depressed, and I could go and help you, because I could teach you how to play guitar, that actually does increase my flow when I play guitar. And if I was able to minimize your depression, the science of positive psychology, you can put people in an MRI machine, shows that with altruism, for both of us, our well-being increases when I help someone else. This is directed volunteerism.
And I have questions like, well, we know how much depression costs, because depression is very costly, billions of dollars per year. What if, for free, I was able through an app to find the thing that I love, that brings me purpose, and go help you? We'd both minimize our depression together, and then we start to measure that and actually say how much money we can save. Is that completely lunatic? Because for me, what seems lunatic is to keep doing the same things and think that things will change.
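[Editor's note: the "dating service for purpose" above is, structurally, a matching problem: pair someone who wants help with a volunteer whose flow activity covers it. Here is a toy sketch; the names, fields, and the simple greedy rule are all invented, not an actual app design.]

```python
# Toy sketch of directed volunteerism as a matching problem: pair each
# person seeking help with a volunteer whose flow activity overlaps what
# they want to learn. Everything here is illustrative.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    flow_activities: set = field(default_factory=set)  # things they love doing
    wants_help_with: set = field(default_factory=set)  # things they'd like to learn

def match_for_purpose(people):
    """Greedily pair each seeker with the first helper whose flow overlaps."""
    matches = []
    for seeker in people:
        for helper in people:
            if helper is seeker:
                continue
            shared = helper.flow_activities & seeker.wants_help_with
            if shared:
                matches.append((helper.name, seeker.name, sorted(shared)[0]))
                break
    return matches

people = [
    Person("John", flow_activities={"guitar"}),
    Person("Ana", wants_help_with={"guitar"}, flow_activities={"cooking"}),
]
print(match_for_purpose(people))  # [('John', 'Ana', 'guitar')]
```

A real service would weight matches by proximity, need, and measured well-being outcomes, which is exactly the data question the rest of the talk raises.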
And I hope this has been encouraging, because I think the real opportunity we have is to say, "What brings us purpose? What are our values?" Do you know what they are? And if you do, can we build them into the machines that can transform our lives for good. Thank you very much.
I may have gone over, so the first question may be, when's dinner? And if that's the case, that's fine. A couple questions? And, again, thank you so much for being here. Yes?
Smriti Popenoe: Hi, I'm Smriti Popenoe, Dynex Capital. I'm interested in going back to the part of your conversation about controlling your own data, and everybody here has probably heard of the Do Not Call Registry, where you think you might've had control of your personal phone information, how do people get control of their personal data when it's almost impossible to even get your phone number unlisted, or delisted? What are your thoughts on just the ability for human beings just to really get control of their own data, at this point?
Havens: That's a great question. Estonia. If you get a chance, there's a great New Yorker article from December of last year, and Estonia is a wonderful model a lot of people are looking at. It's a small European country. It's not a perfect apples-to-apples for the States or anywhere else, but Estonian citizens own their own data, and they also have something called the e-residency program. E-residency means identity: they can prove, "I'm John Havens."
And then, because they own their data, for instance if their doctor wants to pull their data, they get a phone call or a text saying, "We'd like to access your data, and here's what we're going to do with it." And the person can say yes or no. So in terms of the phrases I typically hear, there's so much data out there, the horse has left the barn, pick your analogy. But the point is that, say there's tons of information about my behavior, and this kind of goes back to the gaydar thing. Sorry to use that term, but that's what they called it.
Remember, for all the information that people track about you, where you can't access it, one thing to keep in mind is that a lot of it is simply erroneous. And this is not pejorative. The point is there's a lot of data, and a lot of data sets are built on wrong data. I live in Jersey; there are probably six data sets that say I live in New York, because I used to. That's just factually incorrect.
So one thing about these models I've described: it's not that all that data magically disappears. People start to realize, if I'm the only John Havens, and your data says I'm in New York, and I can give you a piece of evidence that says, no, New Jersey rather, here's my latest bill from Verizon that says I live in Jersey, then all that other data is now irrelevant for people who want to deal with me. And that's also why, when I said our data is so precious, in the models that I've explained our data moves from commodity, where 17 kajillion algorithms from a thousand companies can tell you what I ate yesterday, but I'm the only one that can tell you that I liked it. Does that make sense?
Last question of the night? Second part of the book, Doughnut Economics, no, I'm just kidding. Thanks very much.