Transcript
>> Bill: Okay, let's just jump into it. >> Hey, welcome everybody. Jeff Frick here in the home office. Welcome to 2021. I'm excited to kick off the year. I think it's my first live interview of 2021, and who better to do it with than the dean of big data? Welcoming him in from Palo Alto, Bill Schmarzo. Bill, great to see you.
>> Hey Jeff, great to be here. And we're only a few blocks away. I could almost run over to your place and sit next to you, in our masks, to do this.
>> I know, but I got to tell you, Bill, in getting ready for this, I know we're going to talk about your new book. I went to your Amazon page, and I think I have to change my descriptor for you now to "the author." You've got five different books up there on Amazon. Very, very impressive, congratulations.
>> And they're quite a diverse set of books too, aren't they?
>> I love it, you got a cartoon book, you've got the "Big Data MBA", you've got all kinds of stuff. So before we jump into it, I am curious, from an author point of view, to see your momentum. 'Cause I remember when you got your first book done, you were traveling a lot and you were writing it during your travel. A lot of us, I for sure, kind of aspire to write a book, it's such a cool thing, and it seems like you've really hit a groove. I wonder if you can share your experience of being an author. I mean, five books, that's legit.
>> Jeff, it's motivated by a need more than anything else. The third book, which is "The Art of Thinking Like a Data Scientist", was a workbook that I really needed in support of my "Big Data MBA" book, which is my second book. And my "Big Data MBA" is sort of my bread and butter book. That's used by a number of different universities as part of the curriculum, in data science as well as software engineering, as well as in the MBA program. So yeah, that's my bread and butter book. But we typically run an exercise with that. And so I created "The Art of Thinking Like a Data Scientist" and self-published that one so that I would have distribution and pricing control. 'Cause the minute you go to a publisher, you give up the rights for distribution and pricing.
And I wanted to have something that I could price very inexpensively that anybody could get and use. And so, Jeff, it's always been kind of need-driven. I get to a point where I need to find something that I can teach with, and there's no books around that I can use, and voilà! I write a book.
>> What about this one, though? (Bill laughs)
>> That one's crazy.
>> I love it.
>> The other cover, my favorite cover, was a cover of Sgt. Rock. And it wasn't me flying as Super Schmarzo, it was Sgt. Rock, with these quotes, that kind of comic front end. But this one was motivated just because I was kind of bored, and I wanted to do something that was kind of neat, something that maybe students might enjoy, that they could pick up and it'd be a little lighter. Not lighter, but a little bit of a fresher perspective. But yeah, that one came up just because I felt like I had this need to take what I had and put it into a different consumption format.
>> No, I love it, I love it. But today we're going to talk about this one. This is the new one, 2020, "The Economics of Data, Analytics, and Transformation". The last time we spoke, the topic was clearly all about the valuation of data, and the often-cited study that you did at UCSF talking about the Chipotle use case, really driving it from a use-case point of view first, rather than boiling the ocean, grabbing all the data, throwing it in a data lake, and praying that something comes out of it that's not green and slimy and grabbing people and dragging them back to the swamp. And I tell you, your intro in this book is very specific that you want it to be a workbook, you really want it to be an actionable book. But, Bill, I got to tell you, this is some thick stuff going on in here.
So, it's real, it's real projects. So my first question is, when you get into these, and I know you used to travel all over the country and do these projects, what's the range of scope, maybe the mean, median, and mode, of a digital transformation project or one of these big data projects? 'Cause this is a lot of data that you've got to gather, there's a whole lot of buy-in that you've got to get, and you've got to get really granular about your objectives. It's not an insignificant, easy thing to get started on.
>> Jeff, the key to this that I've learned is that the secret to digital transformation success, especially around leveraging data and analytics, is you have to cheat. And what I mean by that is, you have to do a lot of pre-work before you ever put science to the data. And that's where I think a lot of companies really get in trouble. I can't tell you how many times I've talked to a customer and they'll say, "Well, we've got some data. Can we give it to your data science team and have them tell us what's important there?" And we've talked before about how you distinguish signal from noise in the data. Well, if you don't have a use case around which to do that, if you don't understand how you're going to measure success and progress, if you don't understand the decisions and the stakeholders and the asset models, you can't make any sense of it. So data science is really powerful.
You can do things and change organizations' business models in such impactful ways, but you really have to understand where and how to point that capability in order to be most successful. And so you have to do all this work upfront, this concept of a hypothesis development canvas, which I don't think is covered in that book, but it's covered in "The Art of Thinking Like a Data Scientist" book. I think I have a whole chapter just devoted to it. This is what we do upfront before we ever bring a data science team in. So, yes, you have to do all this pre-work, and you have to get management bought into it. Which is also, by the way, really hard, 'cause management would prefer that this be an IT problem: you guys take care of it. Well, no. If you're trying to create value, IT doesn't generate value. Sales does, marketing does, product does, service and support does, consulting does.
These organizations generate value; IT doesn't generate value, it supports the creation of value. And so you have to do all that legwork upfront to make sure everybody's bought in, because when you do this right, it will change the game. And I will tell you, Jeff, when we do this right, it is like printing money, it's just so easy. Once you do all that homework and change the frame, use case by use case, you're improving retention by 2%, you're reducing inventory costs by 2 1/2%. All these things, you're just printing money. It's unbelievable and it's just a blast.
>> It's funny, as you say, so much of the work is done in the pre-stage. And I can't help but think of any endeavor that's of value: the work is always done in the pre-work, whether it's prepping your house to paint and spending the time to patch the holes, or, as we hear from all the great sportsmen, championships are not won the day of the championship. They're won in all those early mornings and late nights and all the sacrifices that go into it. So it's funny, this whole kind of big data meme about managing the data, and cleansing the data, and organizing the data, and structuring the data, as kind of this nuisance that you have to do before doing the real work. It's actually a huge part of the value proposition, that first phase, before you get to the quote-unquote running it through the algorithm.
>> The book, I believe it's chapter two, talks about the value engineering framework, which really lays out all the things that you need to do before you get to the point of being able to execute. I mean, what data sources do you think you're going to need? What predictions are you trying to make? Who are the stakeholders who are going to make those predictions? What are the asset models around which you need to build the analytics? And one of the most important topics covered there: what are the costs of the false positives and false negatives? Which, by the way, you can't determine after the fact; you have to determine that upfront, so you know when your analytic models are good enough. Is 92% good enough, or do I need 99.5%? What do you need? So you're spot on, Jeff. It's all that pre-work that's required. And not only is it the pre-work; once you've done the pre-work, we have found, and there's probably other ways to do this, but the method that has worked best for me has been this use-case-by-use-case approach. You pick a use case, you knock it down, and you gather the data and analytical learnings, and then you reapply them to the second use case and knock that one down. Then you gather the learnings back up, and use case by use case, you're building out your data capabilities, you're building out your analytic assets, and you're printing money with each use case.
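To make the "is 92% good enough" question concrete, here's a minimal sketch, not from the book, of judging a model by the business costs of its errors agreed on upfront rather than by raw accuracy; the cost figures and function names are hypothetical:

```python
# Minimal sketch: weigh a model's mistakes by the business costs of
# false positives and false negatives that were agreed on upfront.
# All numbers below are hypothetical placeholders.

def expected_error_cost(fp, fn, cost_fp, cost_fn):
    """Total cost of mistakes for one evaluation run."""
    return fp * cost_fp + fn * cost_fn

def good_enough(fp, fn, cost_fp, cost_fn, budget):
    """Does the model's error cost stay within the agreed budget?"""
    return expected_error_cost(fp, fn, cost_fp, cost_fn) <= budget

# Example: a retention model. Missing a churner (fn) costs $500 in
# lost lifetime value; a wasted retention offer (fp) costs $20.
cost = expected_error_cost(fp=50, fn=10, cost_fp=20, cost_fn=500)
print(cost)                                            # 6000
print(good_enough(50, 10, 20, 500, budget=10_000))     # True
```

The point of the sketch is that "good enough" is a business threshold, fixed before the data science starts, not a number picked after looking at the model.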
>> I'm just curious, typically, when you do one of these projects, and we'll just stick with the Chipotle example 'cause it's one that everybody can read about and it was successful. Do you see that generally the next project is a derivation, a new application, of the data and analytics you just did all that work on, where you say, "This did great here, we can do it here"? Or is it, "Wow, this worked really well here. Here's a significant problem that we want to go address. Now we're going to replicate the same process," even though it's probably different datasets in this other example?
>> So, good question. I'll give you an example, one we did at Hitachi Vantara, where I was working. We actually did a project with our chief marketing officer. We had a new product launch coming out, a new server, and we wanted to be able to optimize the sales and marketing focus: go after those accounts that were most likely to buy it, understand when they were likely to buy it, and understand what features were most important to them. And so we worked with marketing, we did this whole upfront vision workshop exercise, and we came up with 14 use cases. 14 different things that they could do to help drive and optimize marketing around new product launches. And so we did the first one, and it delivered $28 million of value the first year. $28 million!
>> Wow!
>> And then you do the second one, and the third one. So what happens is you find a friendly in the business, you make them a hero, and then the second and the third and the fourth use cases come more rapidly, because you've built out a lot of the data infrastructure. Now you're just adding maybe more data, maybe not; maybe some new analytics, maybe not. And so what ends up happening is you do one of these vision workshop exercises, you come up with eight, 10, 12 use cases, and you create a roadmap that says, we're going to do these in this order, and everything you learned in the first one is going to apply to the second and the third and the fourth. And by that time, of course, the rest of the company has heard about you printing money, and they all want in. Which, by the way, becomes another kind of challenge, because there's a real desire all of a sudden to dive in and do a lot of siloed projects. You want to make certain that every project you're doing is building on the others.
One of the lines in the book is: in knowledge-based industries, the economies of learning are more powerful than the economies of scale. The ability to learn and reapply that learning across these use cases is how you really accelerate the creation of value. So you need a strong governance process to pick out which use cases you're going to do first and how you're going to enforce the reuse and governance, so people don't run off and do siloed IT projects or orphaned analytics projects and things like that. You have to have a discipline around this asset of data and analytics. And if you do that, this is really easy, and it's very productive and very fruitful as well.
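That roadmap step, deciding which use case to do first, can be sketched as a simple prioritization exercise. The use cases and the value/feasibility scores below are made-up illustrations, not from the book or the Hitachi Vantara engagement:

```python
# Sketch: score candidate use cases on business value and
# implementation feasibility (1-10 scales), then order the roadmap.
# Names and scores are made-up illustrations.

use_cases = [
    {"name": "customer retention",     "value": 9, "feasibility": 7},
    {"name": "inventory optimization", "value": 7, "feasibility": 8},
    {"name": "predictive maintenance", "value": 8, "feasibility": 4},
]

# Rank by combined score; weighting feasibility keeps the first win
# achievable, so the learnings can be reapplied to later use cases.
roadmap = sorted(use_cases,
                 key=lambda u: u["value"] * u["feasibility"],
                 reverse=True)

for rank, uc in enumerate(roadmap, 1):
    print(rank, uc["name"], uc["value"] * uc["feasibility"])
```

The governance point survives even in this toy form: the ordering is decided once, centrally, instead of each silo picking its own favorite project.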
>> Well, how long are those types of projects? Are they weeks, months, obviously not years, this whole topic hasn't been around that long?
>> No, so the first project will take probably, depending on the organization, anywhere from six to nine months. (coughs) Excuse me.
>> It's a commitment. But that's a foundational investment in your capabilities and your data cleansing, and your data-
>> The second one will take five to six months, the third will take two to three months, and then you'll start doing two simultaneously, and then three simultaneously. You'll have built out not only the data and analytics foundation, but now, most importantly, you've built the organizational discipline around how to do this. This is where organizations fail, Jeff. It isn't the technology, it isn't the dirty data; it's that they lack the discipline to continue to follow a process, and they end up wanting to kill the goose that laid the golden egg by accelerating beyond it. No, stay on target, keep following the process.
And if you do that, use case by use case, and develop that organizational fortitude and muscle, then you've got this thing beat and it's very straightforward. But you need to have a strong governance process, and the governance process has to have teeth. One of the things I recommend to our clients is: you own the data. If you're the chief data officer, you own the data, you own the analytics, and you own the governance process. And you're going to piss people off at times, because you're not going to pick their projects first, and there's all kinds of things you can do to help alleviate people who are upset. But you need to have one person who can sit on top and enforce and drive this thing. And it can't be some lowly person in the IT organization; it's got to sit up at the executive level, in a CXO. It's got to report to the CEO and be instrumental, because data is this modern-day asset.
>> Can it even work if you don't have that top level support?
>> No. Which is why I think it's rare to see this be successful in really large organizations. Data silos today are less a technology issue and more of a compensation issue.
>> Compensation issue?
>> How people get paid, how they make money, and their willingness to share data across lines of business. My favorite example is from working with a financial services organization. They wanted to not only create a calculation of customer lifetime value, but they wanted to predict how much they could get out of their customer. And there are ways you can do that; it's pretty cool. But if you don't have a holistic view of the customer, you're going to make all kinds of bad decisions. For example, it's really hard to get the wealth organization and the small business organization to share their data with checking, credit card, mortgage, loans. And so you have this jaded view. And one example I thought was great: because you had the separation between small business, wealth management, and the rest of the company, the company made a decision, based on the other transactions, checking and savings and such, to cut this one person off.
This one person they cut off ended up being a small business person who was funding their business through their credit card, who had a lot of wealth and was building a lot more. But because you didn't have the holistic view of that customer, you made a bad decision. So again, these data silos kill organizations. And in big organizations, data silos are political wars.
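That holistic-view point can be shown in a few lines. The account figures and the `relationship_value` helper below are hypothetical, purely to illustrate how a siloed view and a whole-relationship view of the same customer diverge:

```python
# Sketch: why a holistic customer view matters for lifetime-value
# decisions. All balances are hypothetical toy numbers.

from collections import defaultdict

# Siloed records: each line of business sees only its own view.
records = [
    ("cust_42", "checking",        1_200),
    ("cust_42", "credit_card",   -18_000),  # funding the business on the card
    ("cust_42", "small_business",  95_000), # thriving business deposits
    ("cust_42", "wealth",         400_000), # substantial assets
]

def relationship_value(rows, customer):
    """Sum value across ALL lines of business for one customer."""
    totals = defaultdict(float)
    for cust, _line, value in rows:
        totals[cust] += value
    return totals[customer]

# The checking + credit-card view alone looks like a customer to cut off...
siloed = sum(v for c, line, v in records
             if c == "cust_42" and line in ("checking", "credit_card"))
print(siloed)                                   # -16800: looks bad

# ...while the holistic view shows a highly valuable relationship.
print(relationship_value(records, "cust_42"))   # 478200.0
```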
>> That's really interesting. It's such a different time, and the people that haven't figured that out yet are so far behind. It used to be that holding information, when there was limited information and a lot of asymmetrical access to it, was a legitimate source of power.
>> Yes.
>> But that's not the way anymore. Now there's more information than we know what to do with. Be a trusted conduit of good information, not the person that tries to hold the information, because it's going to go around you.
>> And the economic value of data is only realized when you can share and reuse that data across an unlimited number of use cases. That's the economic multiplier effect; that's what drives this. So you are spot on, Jeff, it is the exact reverse. Hoarding data at one time was how you maintained power; sharing data now is how you create value.
>> It's so funny that people still haven't figured that out. I want to shift gears a little bit, Bill, 'cause you talk a lot about the laws of digital transformation. Which is great, 'cause everyone and their cousin talks about digital transformation all day and twice on Sunday, but you actually jump in and do some definitions. So I want to highlight a couple of things that you pointed out. Digital transformation law number one is about reinventing and innovating business models, not just optimizing existing business processes. How often does that come up in these workshops, that we're not simply trying to do better and make better widgets faster, cheaper, stronger, but we're actually trying to redefine our business model?
>> So, here's one thing that can help, a hidden trick, a technique we use when we work with companies in that space. We create customer journey maps and service design templates. These are design thinking tools. Think about a customer journey map, for example. What a customer journey map does is map out a customer who's trying to do something, let's say go on vacation. And it starts by identifying the epiphany, when they decide they're going to go on vacation, and goes all the way through to the time when the vacation is over and they're having sort of an afterglow moment. If you're in the theme park business and that customer had an epiphany, they want to go on vacation, that's the point where you want to get involved. You don't want to wait for them to figure it out and say, "Am I going to go to a theme park? Am I going to go to a resort?
Am I going to go to a sporting event? Or what am I going to do?" By forcing companies to think about their customer journey maps, it gives them a chance to first understand all the key decisions customers are trying to make along that decision path; that's number one. Number two is to identify the impediments to those decisions. And the third thing, once you've identified those, is you can think about how to reinvent that path. And my favorite example is how I order my favorite cereal, Captain Crunch. It used to be when-
>> With or without crunch berries.
>> I like it without crunch berries. (screen buzzing) I'm old school, I like it just straightforward. Pure Captain-
>> I want to make sure we get that cleared up.
>> The pure Captain Crunch. So, what happens today? I go to the cabinet, pull out my box of Captain Crunch. And when the kids are home, I shake it and it's usually like crumbs; they don't bother to throw the box away, they just leave it there. So now what happens? I give my box a shake, there's no Captain Crunch in it. What do I got to do? I got to get dressed, find my car keys, get in the car, drive down to Safeway. I got to park the car, go to the back, grab the Captain Crunch, stand in the express line. There's always somebody in the express line ahead of me who's got 15 items and not 10, et cetera. So that entire journey... I get home and I finally have Captain Crunch. That entire journey is negative, negative value creation, until the point where I'm actually eating Captain Crunch.
Everything else I have to go through to get Captain Crunch: go to the store, park, buy it, pay for it, stand in line, come home. It's all negative. How does Amazon solve that problem? Well, in the future, what's going to happen is I'm going to lean back and say, "Amazon, Alexa, two boxes of Captain Crunch, pronto, please." And within 30 minutes or so, a drone is going to come up and drop that thing off. We're going to take that entire process, and we're not just going to optimize it, make it a little easier. The drone is not going to come grab me and drag me to the store and make me buy it; they're going to bring it to me. All those steps that are negative value creation, Amazon just wiped them out. And that's the key, I think: organizations, in order to reinvent their business models, need to understand the customer journey maps, need to understand where the sources of value creation are and where the hindrances to value creation are, and then reinvent everything based on having superior knowledge about your customers, your product consumption, and the operations.
>> I'm curious, when you get into the bowels of these projects, how well do people really understand the value creation that they're providing their customers? I mean, obviously some do, but is it as pervasive as it should be? Are people rallying around it? And do they know that this particular piece of the equation has a lot of value to the customer and this one maybe doesn't, regardless of the effort, pain, difficulty, or cost to deliver those pieces?
>> There's a handful of companies that really do understand and are very customer-centric. Most companies aren't. The vast majority of companies have what I would call an inside-out view of the world. They look at the products and services they provide, and then try to find customers who match what they think they need, instead of an outside-in view, which is understanding what customers are trying to do and then creating products and services to meet those needs. We are a product-centric world. What's unfortunate is that the only person that really counts in the entire value creation process is the customer. They're the ones with ink in their pen; they're the ones who are paying for things. And so we really don't do it. And time and time again, companies will say, "Oh, we use design thinking." I'll say, "Okay, sure.
"Show me your customer journey map." Crickets. "Well, we haven't done one." Then you're not doing it. If you don't have a customer journey map, you are not doing design thinking, I'm sorry. So I think it's one of those things where the vast majority of companies haven't figured this out. But once they do, it's interesting, Jeff: the most important people in the company change. What happens when you do a customer journey map? It's now the people at the front line who are the most important people in the company. It's not the vice president of marketing and the vice president of sales; I mean, they don't know shit about what goes on. It's the salesperson, it's the barista, it's the technician, it's the engineer. Those are the people who every day are part of that value creation process, or witnessing the value hindrance process. And so there's this dramatic change in the organization, where the power moves from the top down to the bottom. And I can tell you right now, in a lot of companies there aren't a lot of executives who want to give up the power they have at the top.
>> I know, it's so funny. In one of the interviews I did with Darren Murph that I reference all the time, he talks about the change in leadership. We hear about the kind of servant leaders, but his thing is more: be an inhibitor remover, be a blocker remover. Get shit out of the way for your people so they can be more effective at doing their job, 'cause you've got the authority, you've got the call, you've got the rank to help basically remove obstacles. It's a very different way of thinking about leadership.
>> It's like that on my team. I told my team that if you ever come to me for a decision that you already know how to make, I will be mad at you. If you know what decision to make, make it. You're closer to the situation, you're closer to the problem, you're closer to the customer. You make the decision. My only requirements are: be intelligent and learn. I mean, manage the risk of your decision. Make sure you've thought about the risk of it going right and not going right. But whatever you do, whether it goes right or not, make sure you're learning from it.
>> I just want to follow up with another great example from Amazon. And the magic of Amazon is just the execution machine that Jeff and Andy, through AWS, and the rest of the team have built, 'cause they just execute well. But it's a funny story about Amazon Go. The objective of Amazon Go is that you could walk in, grab something, and leave. And apparently, and I haven't heard this validated, it's hearsay, but when they first started, they had a moderately full assortment of goods from a regular grocery store, and they just couldn't do it 'cause there were too many pinch points in the process: when you're getting your slices of pastrami in the back, or you're waiting for the cake maker to put "Happy Birthday" on top of the cake. And so they finally figured out, what are we optimizing for? If our objective for this thing called Amazon Go is for people to walk in and walk out in a very short period of time and not have to go through a cash register, they basically had to restructure the assortment of the store to support the objectives of what they were trying to do.
So now it's all pre-packaged: it's sandwiches and chips and drinks, and I'm sure there's healthy stuff too. But it's really funny, they stayed true to their primary objective, even if that meant redefining what that particular type of store is. It's a great story. And I'm sure you've been there: you walk in, you grab your thing, you're out. It's crazy. It's really weird.
>> It's really a fanatical focus on the customer. Not just the customer experience, which is important, but understanding what decisions the customers are trying to make and how you help them make those decisions more easily. What food to grab, how you set the store up, putting the sandwiches and the chips right beside each other, all this stuff. And no one's probably better than Amazon at understanding customer and consumer buying patterns, so they could structure the store in a way that makes it more efficient for somebody to go in, get what they want, and get out.
>> So I want to look at another one of your digital transformation laws, number four.
>> Number four.
>> Which is: digital transformation is about creating new digital assets and leveraging customer and product insights to drive granular decisions, and, this is the important part, hyper-individualized prescriptive recommendations. So there's this whole concept of hyper-customization. And it's funny, the first time you ever realize that if I look over your shoulder at your Facebook and you look over my shoulder at mine, we're not seeing the same thing. And even more, if you do a Google search on Palo Alto High School, and I do the same thing, we're going to see different results. So there's this recognition, this expectation, that you're going to deliver to me the stuff that's relevant to me. A very difficult challenge for a big organization.
>> It is. And by the way, it's based on economics. I'm writing a couple of blogs right now on this concept of nano-economics, which is about individual human and device propensities and how to leverage them. So you have different propensities and desires in life. You've got that cool scooter bike thing you ride around on, and you've always had the latest gadgets. And now you have different interests and passions, there you go, basketball and hiking and things like that. So it's about understanding those propensities, those predicted propensities, what people are going to want to do, and then using that to optimize the business and the customer experience. We've got macroeconomics, we've got microeconomics, and now we have this world of nano-economics, which is really about codifying these propensities so you can derive and drive new sources of customer, product, and operational value.
And that is really hard, because to do it you have to use technology. If you've got 50 million customers, there's no way anybody in marketing could create an individual marketing campaign for every one of those customers. So you have to rely on technology to scale it. If you're going to scale, your ability to codify these propensities so you can operationalize them becomes really critical. To me, Jeff, it's the biggest difference between what we've taught in the world of BI and what we teach now in the world of data science. BI is a world that works on averages: on average, people are going to buy this. Data science is about propensities: you're most likely to want to buy this, I'm most likely to want to buy that.
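The averages-versus-propensities distinction can be sketched in a few lines. The tiny purchase history and the hand-set logistic weights below are illustrative stand-ins for a fitted model, not anything from the book:

```python
# Sketch: BI works on one average for everyone; data science scores a
# per-customer propensity. Data and weights are toy illustrations.

import math

# (visits_last_month, opened_last_email, bought)
history = [(1, 0, 0), (2, 0, 0), (5, 1, 1), (7, 1, 1), (3, 1, 0), (8, 1, 1)]

# BI view: one average for the whole customer base.
average_buy_rate = sum(b for _, _, b in history) / len(history)
print(average_buy_rate)             # 0.5 for everybody

# Data science view: a per-customer propensity score.
# Hand-set weights stand in for a fitted logistic regression.
def propensity(visits, opened, w=(0.9, 1.4), bias=-4.0):
    z = bias + w[0] * visits + w[1] * opened
    return 1 / (1 + math.exp(-z))   # logistic function

print(round(propensity(1, 0), 2))   # disengaged customer: low score
print(round(propensity(8, 1), 2))   # engaged customer: high score
```

The average says the same thing about every customer; the propensity says something different about each one, which is what makes hyper-individualized recommendations possible.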
>> So, Bill, I want to shift gears a little bit to a concept that you brought up. And I'll bring up this little picture. This is actually a Gartner picture, but when we talk about analytics, it's descriptive, diagnostic, predictive, and then ultimately prescriptive. And I saw this other version, which is slightly different, where they add cognitive. But where you go, I think, is really interesting: you take that next step to autonomous. There's so much talk about autonomous vehicles, and I think Oracle introduced their autonomous database. But this whole concept of moving beyond prescriptive to really self-organizing, self-regulating, self-fixing, and doing it by itself is pretty interesting. And you're the first person I've ever heard take that typical scale that we see all the time and tag autonomous onto the top right corner.
>> Well, thank you. Yeah, I think it is the ultimate of where we want to go. And it's not just products; we can create autonomous processes too. You talked about Oracle and their autonomous data management system. It's sort of the ultimate of where we want to go: autonomy is about creating these assets or products or processes that are continuously learning and adapting through every iteration. And for me personally, the aha moment was the quote from Elon Musk when he said that he believed that when you buy a Tesla, you're buying an asset that appreciates in value, not depreciates in value.
>> The thing that's really profound and what I'll be emphasizing at the sort of what, that investor day that we're having focused on autonomy, is that the cars currently being produced, with the hardware currently being produced, is capable of full self-driving.
>> But capable is an interesting word because...
>> The hardware is.
>> Yeah, the hardware.
>> And as we refine the software, the capabilities will increase dramatically, and then the reliability will increase dramatically, and then it will receive regulatory approval. So essentially buying a car today is an investment in the future, you're essentially buying a car, you're buying... I think the most profound thing is that if you buy a Tesla today, I believe you are buying an appreciating asset, not a depreciating asset.
>> Now, when he said that, I know there were a lot of people who gave him shit about it. Like, "Oh yeah, you're going to buy a Tesla, stick it in your garage, and 20 years later it's going to be like a 1964 1/2 Ford Mustang, it's going to be worth a bunch of money." No, no. What he was saying was that the more that car gets used, the smarter it's going to get. And across the million Teslas out there, any one experience that one Tesla has and learns from gets shipped back up to the Tesla cloud, which aggregates it and propagates it back so that every car has learned what that one car has learned. So to me, that is such a powerful, game-changing concept: that you can leverage AI, deep learning, and reinforcement learning to build these assets that are continuously learning and getting smarter through every interaction.
>> There's no silver lining to a fatality, but the upside of a fatality in the self-driving world is that, in the human world we're used to, when somebody crashes a car, they learn a valuable lesson, and maybe the people around them learn a valuable lesson: "I'm going to be more careful. I'm not going to have that drink." When an autonomous car gets involved in any kind of an accident, a tremendous number of cars learn the lesson. So it's fleet learning, and that lesson is not just shared amongst one car; it might be all Teslas or all Ubers. But something this serious and of this magnitude, those lessons are shared throughout the industry. And so this extremely terrible event is something that actually will drive an improvement in performance throughout the industry.
>> Well, he's got another angle on that too, which is that your Tesla doesn't sit in your garage ever. As soon as it drops you off, wherever you're going, whether that's work, home, or the grocery store to get your Cap'n Crunch, it goes to work. It starts doing ride sharing by itself, and it becomes this much more productive asset that's not sitting around 95% of the time just collecting dust and getting dirty.
>> It's an amazing concept. And I've had a small number of customers with some really interesting engagements about how they apply that concept within their operations. How do you self-monitor, self-diagnose, self-heal, and continuously learn? So I do think that's a big part of what I talk about in the book, and I think it's also a big part of where leading-edge companies are going to go in driving this analytics maturity.
>> I'm just looking at your economics of artificial intelligence, chapter six. And you talk about getting really granular in your data, and diverse in your data, and lots of different data sets. And you outline a number of different kinds of artificial intelligence models and theorems. But the part that struck me from earlier in the book is that you say that most of the problems, most of the time, are passive-aggressive people getting in the way. It's not a technology problem, it's not a data problem, you can sort through that. It's not even necessarily defining a clean ROI, again, back to the Chipotle examples, whether it's number of items per ticket, or how many people add avocado, whatever it is. That's not the issue. So I'm just wondering, is your next book going to be how to manage up?
>> I've thought about a book on change management. And how do you... To me, the chapter in the book that I felt most excited about was actually chapter nine, the chapter about team empowerment. When I started the book, I wasn't going there; that chapter didn't even exist, until I got to some point and realized that unless you empower the team, which means you have to address this passive-aggressive behavior, you have to make sure you've got organizational alignment, and you have to build the ability across the organization to transition from settling on the least-worst option to synergizing on the best-best option. And a lot of that is, you know, dealing with the passive-aggressive behavior.
Which is why, when you do these vision workshops, for example, one of the things that we mandate, and we still do these things, is that every one of the key stakeholders is in the room. So we go through a stakeholder map: who are all the key stakeholders? And we make sure every stakeholder is in that workshop. If somebody can't make it, we reschedule. Because you don't want somebody to walk out of there whose voice didn't get a chance to be heard, so they felt like they'd been ignored. They're going to get you. So you need to bring all these people in. And then as you go through this process, they need to understand that, yes, we're going to take a use-case-by-use-case approach. And you may be use case number 14, I'm sorry. But the advantage for you is, by the time we get to use case 14, we're going to have all these assets prebuilt. By the time you get to your use case, it's going to happen fast, it's going to be high quality, it's going to be really spot on.
>> So let's talk about this, you've got your empowerment steps one through five, your great little lessons, I love this book. Number one is really getting everyone to buy into the organization's mission statement, which, again, seems drop-dead simple, but most people don't take this stuff seriously. Number two, I love: speak the language of the customer, not only so you understand the customer, but to normalize everybody in the room and make sure you're speaking the same language. I love that. Which would beg the question, that's probably not been the case on a few of your projects.
>> It's very much so. Yeah, everybody has their own definitions. The only definitions, the only terminology that matters is the terminology the customer speaks.
>> And then your next one is to flex the team, be malleable to the goals of the project, not some stupid hierarchy that was defined 200 years ago based on an old organizational structure before there were telephones, love it. The "and" mentality: embrace different perspectives, blend and break apart ideas to synergize something more powerful and empowering. Again, it just goes back to leadership. None of this stuff happens without progressive leadership, without comfortable leadership, without a leader that is not afraid to hand over the reins and hand over some power.
>> Jeff, that's the key point. You need to have leadership that is confident and comfortable with their willingness to give up control. The old command-and-control structures are dying. And we know that in sports: command-and-control structures don't work in sports. Andrew played football. Was the football coach yelling, "Andrew, go here, Nick, go there, Brian, run there"? No, you trained the team, you empowered the team, and then you gave them the charter. And all those lessons in that book, by the way, you can apply to any sports team. One of the things in there is the idea around organizational improvisation: you have this ability for the team to morph based on the charter or the job at hand. And that means that at some point in time, everybody needs to lead.
Everybody needs to be prepared to lead. Michael Jordan may have been the greatest basketball player of all time, but John Paxson and Steve Kerr took the winning shots in championship games because it was their time to lead. Everybody was on Michael, and you're open, shoot the rock. So how do you create teams and empowerment? I think we're going to see organizations move away from putting people in boxes and instead put people into swirls, so they can move across boxes. You can create a team like a good jazz quartet. They come together, they play, a trumpet player goes out, a new one comes in, you riff a different style. You've got to have that ability to have that organizational improv. We know from sports that's how successful teams are built. But then you get in the business world, and the first thing that happens, and boy, over drinks we could certainly talk about this, is they want to put everybody into a box, and by golly, don't leave your frigging box.
>> That's funny. I just did a blog post a couple of weeks back, actually one of the first ones we've done in this context, with Marcia Conner, from a 2013 IBM interview. And she was so funny because, it's sad and funny, but she said, "We hire great people because they have all these attributes, and we interview them, they're cool, and they have their life experience and their context, and they're going to bring this unique perspective to our team that we can then apply to our own problems." And to your point, as soon as they get in, you hand them the HR book and you stick them in a box. And it's this really funny thing that happens pretty consistently.
>> So I'm going to veer off here for a second, 'cause I think it highlights one of the biggest challenges as AI becomes more dominant in our world. I believe that AI is going to force humans to become more human. And when I say that, what I mean is, think about when we were young, when our kids were young: we're full of curiosity. We take things apart and try to put them back together again, we're always exploring and getting our hands dirty. We have this natural curiosity, which leads to creativity. But as soon as kids get in school, we start driving that curiosity out of people. Standardized testing, and everybody's got the same curriculum, et cetera. It's worse getting into college, where you've got to pass the SAT; we have these various standardized tests. And then we get in the business world, and we have these boxes we sit in.
We do everything in our process, from education through work, to drive creativity and curiosity out of our people. AI is going to change that. The organizations that are going to win are the ones that realize, "Oh my God, what I need to do is to undo all that damage I've done to my workers, who've had their natural curiosity destroyed." The best data scientists are the ones who are curious: "Well, I wonder what that data source might do." I always say that data science is about identifying those variables and metrics that might be better predictors of performance, because if you don't have enough "might" moments, you'll never have any breakthrough moments. And so I think, along this line, we're going to see, culturally, a huge transformation. Our kids are going to benefit from it; we're not, we're stuck in the boxes of the current regime.
But our kids are going to benefit, because people are going to realize, "Wait a second, we've got to bring back that curiosity to explore different things, especially around the language of the customers. And then we've got to be able to create new things," creativity, and then that will drive innovation. So our kids are going to have a much different world than we have, because they're going to be empowered to do things, because AI is going to take care of all the bullshit stuff.
>> Well, hopefully, but there's a counter to that, which is the whole explainable AI thing, which is really another interesting topic that we could probably spend an hour on. The math in these algorithms is complicated, data scientists are smart people, and then the inputs come in and the models train and learn and adjust over time. If anyone ever wants to unpack why this answer came out of this process, especially if there's a concern about ethics, or a concern about privacy, or whatever the concern, is that even possible? And again, let's think about not this year or next year, which is still relatively early days except for recommendation engines and those things, but 10 years down the road. Will you even be able to start to get in and unpack what the heck is going on inside this thing that told me I should do this, or recommended this, or took me down a street in my car that I chose to go down?
>> So there are techniques out there, excuse me, for explainable AI. SHAP, for example, gives us techniques for doing explainable AI, so that we can start taking these models apart and identify which variables had the biggest impact on the decision. But let me push you one step further, and this is a really good topic, Jeff: explainable AI isn't enough. We need to have ethical AI. AI models today, the way they're built, suffer from confirmation bias. Because they use collaborative-filtering-type recommendation engines, they keep feeding you the same stuff over and over again. So if you like the Chicago Cubs, you're going to get lots of information about the Chicago Cubs. (coughs) These models suffer from confirmation bias. And so in order to battle confirmation bias, what we need to do is ensure that we have a way to identify and measure the AI model's false positives and false negatives.
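[Editor's note: a toy sketch of the SHAP-style attribution idea Bill mentions, not the actual `shap` package. For a plain linear model, this kind of attribution reduces to each feature's weight times its deviation from a baseline (average) input. The loan-scoring weights and applicant below are hypothetical, purely for illustration.]

```python
# Toy illustration of SHAP-style attribution for a linear scoring model:
# each feature's contribution is its weight times how far the input
# deviates from the average applicant (the baseline).

def linear_attributions(weights, x, baseline):
    """Per-feature contribution to score(x) - score(baseline)."""
    return {name: weights[name] * (x[name] - baseline[name]) for name in weights}

# Hypothetical loan-scoring model and applicant (illustrative numbers).
weights   = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
baseline  = {"income": 50, "debt": 20, "years_employed": 5}   # average applicant
applicant = {"income": 70, "debt": 35, "years_employed": 2}

attr = linear_attributions(weights, applicant, baseline)
# Rank by absolute impact to see which variable drove the decision most.
ranked = sorted(attr.items(), key=lambda kv: abs(kv[1]), reverse=True)
print(ranked)  # debt dominates: -0.6 * (35 - 20) = -9.0
```

[For nonlinear models like trees or neural nets, the open-source `shap` package computes analogous attributions without assuming linearity.]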
So we need to understand when the model got the result wrong, so that we can feed that back into the model. Now, false positives are actually pretty easy. A false positive is somebody you think is going to be great, and they flake out. You give somebody a loan, you think they're going to repay, and they don't repay. Those are the things you predicted that didn't happen. And you can take those false positives and feed them back into your AI model so that it doesn't suffer from confirmation bias. But the real key is the false negatives: the person you didn't give a loan to.
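[Editor's note: a minimal sketch of the bookkeeping Bill describes, counting a loan model's false positives and false negatives so the misses can be fed back into retraining. The decision log and field names are hypothetical, not from the book.]

```python
# Count the four outcomes of a loan model's decisions. The false
# negatives (rejected applicants who would have repaid) are the hard
# ones to observe -- you only learn them by following up.

def confusion_counts(decisions):
    """decisions: list of (approved: bool, good_outcome: bool) pairs."""
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for approved, good in decisions:
        if approved and good:
            counts["tp"] += 1      # approved and repaid
        elif approved and not good:
            counts["fp"] += 1      # approved but defaulted
        elif not approved and good:
            counts["fn"] += 1      # rejected, would have repaid
        else:
            counts["tn"] += 1      # rejected, would have defaulted
    return counts

# Hypothetical history: (approved, outcome or follow-up outcome).
history = [(True, True), (True, False), (False, True), (False, False)]
print(confusion_counts(history))  # {'tp': 1, 'fp': 1, 'fn': 1, 'tn': 1}
```

[The `fn` column is exactly the one that requires tracking down the people you rejected, which is the hard part Bill goes on to discuss.]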
>> Well, how do you clear those out of there? How do you pull that out there?
>> So there's a couple of ways, two ways that I know of. It's a hot topic; I've written blogs on it. One way is you actually go through and follow up with the people you didn't hire, the people you didn't give a loan to, and find out what happened. They got a loan from somebody else; that might be harder to do. There was a person you wanted to hire, you didn't hire them, your model said to reject them. You find out where they went and how they did. A more powerful way, though, I believe, is that you allow humans to look at the model and make the ultimate decision. So let's say somebody comes up for a loan, and that person's got a really bad loan history, and the system says, "Reject them." But they talk to somebody, and the person says, "Well, I've gone through hard times. I'm changing my life.
I'm trying to get a loan to buy a car so I can become an Uber driver, I can do DoorDash, things like that." All of a sudden, that loan makes a lot of sense. That person's going to be able to pay that loan back, because they're going to use their car to make more money. So you have the ability to have humans look at the results but make their own decisions, as far as whether to give that loan, whether to hire that person, whether to give that student a scholarship. You have to allow humans to be able to look at that and override, but you need to understand, when the human makes a decision to override, what was their rationale? Because if it ends up being right, you want to learn from that. So you capture the false positives and false negatives. You create this environment so that you can combat confirmation bias.
>> Tough to codify, though, tough to codify. I've got a feeling, especially if maybe there's no line item in the algorithm for career changes, or, guess what, California changed the law now on the Uber drivers. But even then, it still goes back to empowerment, which is still back to chapter nine, we've come full circle, about giving people the power to actually do something against this great new tool the company just rolled out that's going to increase the accuracy of our credit scoring before we choose whether to give somebody a loan or not.
>> You have to empower the team. And again, what you want to do is have them manage your risk, so they're not making million-dollar loans to somebody who's got no chance of paying them back. (coughs) Excuse me. But you want to capture the rationale. So for this person, the rationale might be: this person's making a career change, they want to buy a car, which they're going to use to drive for Uber, and so I decided to do it. And if this person repays the loan, all of a sudden your new model has a line item for career changes, maybe somebody buying a car to become an Uber driver, or whatever, the gig economy. You've now added a new variable to your AI model, and it's just gotten more effective.
>> What about, do you ever see where people increase the number of false positives on purpose, so that they're basically not cutting off too much? The old newspaper-boy model: you don't want to go home with no newspapers. If you sell a newspaper for a buck and it costs you a quarter, you don't want to sell out, you just missed out on 75 cents. You'd rather go home with one newspaper too many than be one newspaper too short. So I wonder if anyone tweaks it to open up the bottom. Maybe I can loosen that algorithm so I get those people, and now maybe I discover a whole new category of customers.
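[Editor's note: Jeff's newspaper-boy intuition is the classic newsvendor problem, and it matches the textbook rule: stock to the critical ratio, underage cost divided by underage plus overage cost. A small sketch with his numbers; the demand history is made up for illustration.]

```python
import math

# Newsvendor rule behind Jeff's example: at $1.00 price and $0.25 cost,
# each missed sale loses $0.75 of margin and each leftover paper loses
# $0.25, so you stock to the 75th percentile of demand.

def critical_ratio(price, cost):
    underage = price - cost   # margin lost per unit of unmet demand
    overage = cost            # loss per unsold leftover unit
    return underage / (underage + overage)

def optimal_stock(demand_samples, price, cost):
    """Smallest stock level serving at least the critical fraction of observed demand days."""
    ratio = critical_ratio(price, cost)
    ordered = sorted(demand_samples)
    index = max(math.ceil(ratio * len(ordered)) - 1, 0)
    return ordered[index]

print(critical_ratio(1.00, 0.25))  # 0.75 -> deliberately err toward extra papers

# Hypothetical daily demand history for one corner.
demand = [80, 90, 100, 110, 120, 130, 140, 150]
print(optimal_stock(demand, 1.00, 0.25))  # stocks above median demand
```

[The asymmetry is the whole point: because the miss costs three times the leftover, the rule loosens the cutoff and accepts more overstock, the same logic Jeff applies to loosening a loan model.]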
>> This gets to that idea. When I was at Yahoo, we had built an advertiser analytics product, and it would deliver recommendations to the media planners and buyers and the campaign managers. And it gave them three options. They could accept the recommendation, and we'd measure how effective that was. They could reject it, and we'd measure how effective that rejection was. Or they could change it, with rationale, and we could measure how effective that was. We always gave the humans control. And what we found is that because the human was in control, they weren't scared of AI anymore. They weren't working for AI; AI was working for them. So it gets back to this point: you've got to empower the frontline people. You've got to make sure they feel comfortable that they can make decisions and be wrong and not be punished, as long as they learn.
I mean, the only way we ever learn is through failure. We've got to embrace failure as a way to learn. So you give a few loans to people who shouldn't get loans, but you learn from it, and maybe one or two of those people you gave loans to pay them off, and your model just became more effective. And by doing that, by the way, I will argue, you ultimately increase your total addressable market.
>> That's what I mean, 'cause if you're too tight, you're missing opportunities. And some of those opportunities won't be positive, but the net-net should be positive. And it's funny, your last little blurb in chapter nine was own your mistakes
>> Yes.
>> and you'll own your future. I love that. So Bill, before I let you go, as you continue on this journey and you're getting the word out, and there's a lot of dense material: what, and again, I'm probably going to answer my own question, but what should people be cautious of? What should they be learning? How can they be successful, besides the obvious, pick an easy project and get top-level support? Where are the landmines that bite most people, that they just miss?
>> Not doing the upfront homework is a killer. Not identifying all the stakeholders, the people who either impact or are impacted by the business initiative you're targeting, the use case you're targeting, is probably the biggest flaw. You don't identify everybody, you don't bring them in, they feel left out, and passive-aggressive behavior gets you. It also misses the opportunity for a very diverse set of perspectives and rationale from those people. Even your AI models are really better when they're optimizing across a wide variety of sometimes conflicting measures. Optimize across two or three measures, who cares? But when you get to 100 or 200, that's when the AI really starts to stress itself, that's where it flexes its muscle. So by not understanding everybody who's involved and bringing them in, not only do you create a situation that's likely going to lead to passive-aggressive behavior, but your AI model is going to be incomplete, because you didn't consider the full gamut of what people want to do, and your AI utility function will be insufficient.
>> Well, and I can imagine, to the classic consulting story, if somebody pays money to bring you in to help, and you're a paid facilitator and an expert in the field, and you shepherd this process all along, can it be done without Bill? Or more importantly, even if I bring Bill in as a catalyst to help me get started, what's the key to passing the baton, so that Bill can leave and I can see ongoing success with my own internal teams?
>> So the one thing I would say is, don't put people in boxes, put them in swirls. You've got to empower people. This organizational improv is very powerful: the ability to move people around. Everybody needs to be ready to lead, everybody has to have a common language, you've got to have common objectives. Senior management can set the objectives and help define the language, but ultimately it's the feet on the street that have to do it. It's the lesson from Admiral Nelson, which is covered in the book: how he had an inferior naval force compared to the French-Spanish armada, but still defeated them. Why? Because he had empowered each of his captains so that when they got engaged, they could make their own decisions. The French-Spanish armada had one person at the top who was making decisions with flags, telling all the ships where to go. But when they got into that close hand-to-hand combat, or boat-to-boat combat, the captains of each of those ships were in a much better position to battle. And even though Lord Nelson got shot and killed, his troops were still able to inflict an incredibly stunning defeat on the French-Spanish armada.
>> But do you find, when you leave, is there like an AI champion who becomes the person that crosses boundaries? Or is it something where the rank and file can get enough of a feel, post a successful engagement, that they can see the opportunities and drive some of this change at a lower level?
>> So I believe every organization should have what I call a chief data monetization officer, who owns the governance of, and is the champion of, the data and analytics assets. To make certain that the organization has both a carrot and a stick, so that you don't have renegade projects going off where people are out there building orphaned analytics. And by the way, if there's any one problem that I see most consistently across organizations, it's these orphaned analytics: one-off analytics built for a one-off purpose that are never reused again. And then the guy, Bob, who wrote the thing, he retires. And then that model sits there, and it slowly decays over time, and no one knows how to fix it. So to me, what organizations need is this chief data monetization officer who is responsible for driving the use and reuse of these assets across the organization.
And that person, by the way, isn't just a technologist. Maybe it's a chief innovation officer, like what I was doing before, where they are not only innovating with data and analytics, but they also own the innovation of the people, empowering the people so that they understand that the AI works for them, and not vice versa.
>> Alright, Bill, well, I think we'll leave it at that. So go out and get the book, "The Economics of Data, Analytics, and Digital Transformation," anywhere books are sold, I love that line. And congratulations, I mean, you are prolific in kicking these things out. And when I first dug into it, I mean, it is dense. There is a ton, a ton, a ton of great information here. So go out and get the book. Always good to catch up with the dean of big data. But I don't know, maybe you're going to be the dean of psychology. I think this is more of a human factors problem than a data problem; it always goes back to that.
>> It very well could be, Jeff. My next book could be all about human factors.
>> Well, Bill, great to catch up, great to see you, and we'll see you around town.
>> Thanks, Jeff.
>> Thank you. That's Bill, and I'm Jeff Frick, from the home studio. Great to see you, we'll catch you next time on "Turn the Lens". I've got to figure out what my close is going to be.