The Digital Village Show

AI: Game On // Tim Trumper

July 01, 2024 Digital Village

Tim Trumper, NRMA Chair, Advisor to Quantium, Advisor to Andi Search and author of "AI: Game On," takes us on a journey into the world of AI, data analytics, and deep neural networks, revealing their profound impact on modern businesses. Learn why embracing AI and data is essential in today's competitive landscape and how it can revolutionise customer experiences and drive innovation. Tim shares valuable insights on navigating the evolving technological landscape and offers practical strategies for integrating AI to create value and stay ahead.

Guest:

Tim is a distinguished leader in the field of data analytics and artificial intelligence, with a career spanning over three decades. Tim's extensive experience includes roles as CEO, non-executive director, and advisor to numerous high-profile global and Australian companies. His recent book, "AI: Game On," delves into the transformative power of AI and its critical role in modern business strategy.

Hosts:

Paul Scott
: www.linkedin.com/in/paulmscott
Jason Hardie: www.linkedin.com/in/jason-hardie

The Digital Village Show is presented by Digital Village, a network of over 300 digital specialists that collaborate to solve business technology problems for organisations of all sizes.

www.digitalvillage.network


0:00:06 - (Paul): If you've been dismissing AI, applied analytics, machine learning and deep neural networks as not applicable to your business, you'll soon discover your perspective is out of date. And if you've skimmed over those developments as being too hard, you need to know that embracing the world of data and AI is simply not optional. So for those who harbour doubts, I suggest you suspend your disbelief and be prepared to accept that what was once science fiction is now fact.

0:00:40 - (Paul): That's a quote from a book that's just recently been published called AI: Game On, written by our guest today, Tim Trumper. Good afternoon, Tim.

0:00:52 - (Tim): Thanks, Paul. Nice to be here.

0:00:54 - (Paul): And I've got Jason here with me as well.

0:00:57 - (Jason): Good to see you again, Paul. Nice to be here with you, Tim.

0:01:00 - (Tim): Thank you, Jason.

0:01:01 - (Paul): Great to have you here, Tim and Jason. And I think it's number 27 in terms of our podcast for Digital Village. So really interesting book, Tim, and we're going to go into some detail on it in a second. But there was another quote that I wanted to pull out as well, which really caught my eye. This is a quote from one of your own customers, I believe, from a while back, who said you've had a front row seat at the forefront of data use, AI enablement and data ethics over three decades, so why not tell us some of what you've learned? And that was really the trigger for the book and actually got you to put pen to paper, as it were.

0:01:45 - (Tim): That and COVID lockdown. Yeah, no, it was. I've had, you know, I make the point in the book, there's, people say there's never a good time to start a business, start a family or write a book because all those things are hard. And actually writing the book was extremely hard for me, but it was my first book. But I have been delighted with the feedback so far.

0:02:09 - (Paul): And look, I think for those people who don't know you, a little bit of background on yourself. So you're chairman of the NRMA, an advisor and founding shareholder in the data analytics firm Quantium, which I think is a world-class analytics business which provides services to some of the world's leading brands. You've been CEO, non-executive director and advisor to a number of high-profile global and Australian companies.

0:02:39 - (Paul): You're a graduate of AICD. You've completed a Harvard corporate governance program and hold an MBA from UNE. So what your CV doesn't reveal, of course, Tim, is that you're also a mean drummer in a rock band. Do you want to just briefly tell us about that, too?

0:02:58 - (Tim): Well, thanks for bringing that to everyone's attention, Paul. Very appreciative. So, you know, I've been playing in a band since I was 15 with a bunch of guys, and we still play to this day. And it's been a great joy of my life, actually. Playing music with friends, I think, there's not a lot much more fun than that.

0:03:16 - (Paul): No. Great stuff. So, Jace, do you want to kick things off? I know that you've got a ton of questions to ask Tim, and I think what we're going to try and do over the next hour is just take a journey through some of the key points that Tim's covering in the book. So off you go.

0:03:33 - (Jason): Sure. Yeah. So I feel very excited to be chatting with you today because there's obviously been a lot of discussion and conversation around AI in relation to business transformation over the last few years. And so, speaking from the perspective, or maybe for the perspective, of business owners and directors and board members, you're an incredibly insightful and experienced person to speak to, having such experience in that space and in AI. And the points that I really took from the book around data centricity and customer centricity are massive takeaways that I think would be beneficial to elaborate on.

0:04:30 - (Jason): And I am in particular interested in maybe kicking things off with a bit of an exploration or an imagination. I don't know if I should put you on the spot with this, but maybe we can kind of just imagine what the future organization looks like in, say, ten years. And I guess the reason why I'm bringing that up is because of what that relationship between AI and humans looks like, which you talk about in the book, and the subtitle of the book is, sorry, can you read it out? I don't want to get it wrong.

0:05:14 - (Tim): How to decide who or what decides.

0:05:17 - (Jason): Yes, yes.

0:05:17 - (Tim): It's basically how to delegate to a machine.

0:05:20 - (Jason): Yes. So that's a really pertinent thing for business leaders to consider. So I think that would be great to talk about as well. So, yeah, to get started.

0:05:35 - (Jason): What do you imagine the future organization to look like?

0:05:40 - (Tim): Okay, well, I think you're seeing glimpses of it already in that. Let me tell you a story. So my wife recently bought a new iPad, and she got home and there was a bug. Something wasn't working. And I said, just ring them. She said, what do you mean ring them? Huge organization. They'll never talk to me. So just believe me, ring them. So sure enough, she rings them. Phone answers. Hi, thanks for calling Apple support.

0:06:13 - (Tim): Your call will be answered in two minutes 38. Do you want to listen to classical, jazz, rock, or silence? She hits silence. Two minutes 38 later: Hi, I'm Marjorie. Is that Elizabeth? I can see you bought an iPad from us at 2:30 today at the George Street store. I'm presuming your call is about that. Is that correct, Elizabeth? She goes, yes, it is correct. And they diagnose the problem in about 45 seconds, and it's completely resolved perfectly.

0:06:52 - (Tim): So that's what good looks like. And it's because, you know, the backbone there is data enabled. And when you ring their call centre, you know, your mobile phone is you. And they're super smart at that. And it is not a coincidence that with that sort of data spine, that sort of customer centricity and help, that this is the second, third or fourth most valuable company in the world on any given day. And that's what Apple looks like today.

0:07:32 - (Tim): Now, what will it look like tomorrow? I think we might get to that later. But if you're running an Australian organization and you hit the call centre and you start asking the customer 100 dumb questions which you already know the answer to, but your systems are so blind the call centre operator can't see the answer, you will be challenged by somebody who gets it better than that. So I think this is the reason I wrote the book: I'm gravely worried, actually, about how the value here has accrued to the early movers who've used data and analytics and, increasingly, AI to create really big advantage.

0:08:21 - (Tim): It's a fun fact, but it's true, that when OpenAI, via ChatGPT, came out like 18 months ago, I make the point in the book, I went on to LinkedIn, where there's a way you can basically see how many staff work for an organization, which I did, and it was about 500 people.

0:08:42 - (Paul): So 18 months ago, yeah.

0:08:46 - (Tim): ChatGPT was valued at 100 billion Australian dollars. More than that, it was changing the conversation in every organization in the world. It had the fastest adoption curve in the history of mankind, and it had 500 staff. And so one of the theses in the book is that labor and capital are asset values from another time, and that AI and data is the asset value of today. And so if you believe that's true, and I think it's true, it doesn't mean labor and capital's dead. Of course it's not.

0:09:22 - (Tim): But it is being outpaced by AI and data. And I'd add IP to that, too. So if you're not in that world, and back to your question, Jason, about ten years from now, I don't think that genie goes back in the bottle. So the asset values of the future are gonna be more aligned to the model of a data- and AI-enabled world and less to large-scale capital and labor costs.

0:09:56 - (Paul): Okay, so what does that mean, Tim, in terms of what organisations need to be doing today to ensure they don't become victims of that situation but actually benefit from it?

0:10:11 - (Tim): Sure. So look, again, it's back to why I wrote the book: you know, when things change, you need to learn more. So I think it's upon all leaders at the moment to learn about AI. There are incredible podcasts, there's lots of books, there's many low-cost courses. Turn your mind to it. Find tools that work for your business and use them, and become as close as you can to an expert on what's coming down the pipe. Because, well, if you believe it's going to be the future, you don't want to be ignoring it.

0:10:53 - (Tim): So when we've had changes like this before, there's a famous, I mean, really amazing photo of George Street in Sydney. It's from around about 1907, I think. And George Street is full of horses, just completely packed with horses. Ten years later, there's one horse and it's full of cars. So as the car arrived, you didn't want to be a blacksmith. That's bad timing. So these changes, they look unbelievable, like we've never seen anything like it. Well, it's happened plenty of times before. When electricity hit, electricity scaled really quickly, as did the automobile. So it's just another big change.

0:11:40 - (Paul): Yeah. And I mean, your background and experience is very much at senior leadership level in some big organizations. Do you think that, particularly here in Australia, boards understand that they need to be doing this? Are they actually making moves to engage, or are they being quite sort of standoffish, with a we'll-wait-and-see attitude? What do you think is happening?

0:12:06 - (Tim): I think largely, I think the zeitgeist is all around AI, and that's not a bad thing. I think there is a lot of attention on it. I think it's a discussion in every boardroom I'm in, that's for sure. And I don't think it's a sense of pretending it's not there. I don't think that's the case. I think it's more: I am concerned about what to do. I am also concerned about the risks, and the risks are real. And the trouble is, you know, boards should be worried about risk.

0:12:41 - (Tim): You can, you know, you can allow the worry of risk to manifest itself as inertia. And I think that's actually a big risk. Does that make sense?

0:12:54 - (Paul): No, it does. It does.

0:12:56 - (Jason): And I guess risk is one part of the conversation, but there's also actual new value creation through better propositions to new customers. And I feel like maybe that is also an unknown to a lot of businesses as well, business leaders. What kind of advice would you give business leaders and boards about what they need to be thinking about, when they might be familiar with AI and maybe using some of the basic tools?

0:13:30 - (Jason): But what are some of the things that they should really be thinking about in terms of adoption?

0:13:36 - (Tim): Well, look, the point I make in the book multiple times is: try and imagine what you would change in your business if you could understand how your customer used your product in new and deeper ways. So, like that Apple story I described, what does your version of that look like? If you could actually know the customers who are using you, what they are using, how they are using it, when they defect, why they defected?

0:14:15 - (Tim): And some of this can be analog. You know, I made the point in the book that I spent a lot of time in magazines a long time ago, and we had the most legendary magazine editor, Nene King. Nene, if you're listening, hello. And Nene used to work day and night. She spent her weekends at the supermarket checkouts watching women pick up her magazine. She would ask them, what did you like? What didn't you? And if someone put it back down, she would almost crash-tackle them to understand why.

0:14:55 - (Tim): And it is that curiosity, the curiosity around defection. Why did that person not buy the magazine? If you have that curiosity and you can bring AI into it and understand why somebody did not transact with you, if you can harness that sort of signal, you are going to be in amazing shape.

0:15:21 - (Jason): You talk about finding these new signals in your book, and, you know, businesses are very aware of the value of data and how they need to be more data-centric and data-driven and all of that. But one interesting thing that I really got out of your book was finding new touch points or data points that might not be captured yet, and that's a nice kind of example of getting some kind of information that is going to help drive better understanding.

0:15:57 - (Jason): Do you have any other examples of how a business might find these new points of data in all of the customer's experience? Because there's kind of information everywhere. How do you find the useful data and capture it so that you can actually use it in an algorithm or some kind of machine learning?

0:16:22 - (Tim): Well, I think you've got to create a value exchange with the customer. So if the customer is giving you some information, the customer needs to feel like there's real value in that. And there's plenty of examples where that is not the case. But I guess one example: at the NRMA, we've built an app called myNRMA, and we've got well over a million customers now using it. And they're using it to redeem discounts on petrol, at restaurants. I mean, I use it, and I can see that I've saved hundreds of dollars, and it's got a wallet, I can see what I've saved on. So it's not immaterial, it's not some trivial discount or something that's just really designed for them to harvest your data and get nothing back.

0:17:19 - (Tim): You know, there's over $100 million of per annum benefits to the customers. So I think that's an example of where the customers are winning out of using it. And that's what we should be doing. That's what good business looks like. Whereas if you're just grabbing a lot of signal, well, there's lots of examples of people creating, like, loyalty taxes: they're using the signal from the customer to reprice them badly.

0:17:47 - (Tim): But that's just unfair, unethical. It will get called out and lead you to a dark place.

0:17:53 - (Paul): So, yeah, I think we've seen that with the airlines. There have been a number of instances where airlines have been caught out with that, encouraging people to become part of a loyalty program, watching them go through the process of pricing up tickets time and time again. And guess what? When it comes to booking, oh, it's gone up.

0:18:14 - (Tim): Yeah, we do see some of that. I mean, the famous example I referenced in the book was an American airline, I won't name them, but it was an American airline, very famously. And you've probably all seen the incredible, very graphic video footage on the Internet where they overbooked the plane.

0:18:35 - (Paul): Yes.

0:18:36 - (Tim): And then they had to, you know, they offered a voucher to get people to get off the plane. And of course, no one took the voucher. And then eventually a security guard comes on and grabs, tackles a customer and drags them off, screaming. I mean, that is the algorithm gone mad. It is so obvious that, you know, we can find bad examples everywhere. Yeah, yeah.

0:18:58 - (Paul): But it's about the intent, isn't it? And I think that one of the concerns I've heard about AI in general is the lack of governance, the lack of guide rails, and the increasing number of instances where AI is doing bad things or bad actors are using AI for bad things.

0:19:19 - (Tim): Yeah.

0:19:20 - (Paul): What's your view on that?

0:19:22 - (Tim): Well, there's a lot there, but before I jump to the specific question, I do want to make a comment. The founder of Andi Search, which I'm also an advisor to and an investor in, in the US, he said, and I just love this, when you talk about AI regulation, be careful not to confuse things. If Bonnie and Clyde get in a getaway car and shoot people, it is not the car company's fault. And I think we run the risk here when people regulate.

0:19:56 - (Tim): So you could put brakes on some of this technology. Some of this technology is already finding antibiotics that human beings have missed. Some of this technology is already finding microscopic lung tumors that the doctor couldn't see. You don't want to stop the development of those things.

0:20:17 - (Paul): No, no.

0:20:18 - (Tim): What you do want to stop is rogue behaviour causing harm.

0:20:22 - (Paul): Yes.

0:20:23 - (Tim): And so it does need some nuanced thought about how you're going to do that. And I just come back to: we already have existing laws. You know, if I open a hamburger restaurant and my hamburgers are full of Ratsak and someone dies, I will be sued, I will be shut down, and that's good. We can control how we serve hamburgers. We cannot control how we serve advertising. I find that unusual and weird.

0:20:55 - (Paul): Yes.

0:20:56 - (Tim): And I'm not the regulator, but I don't think it is impossible to control how you serve advertising. Ultimately, somebody is buying the advertising and you could have some sort of verification of who that is. And if it is some rogue cyber criminal, well, why are we letting them buy advertising? That's not a question. Sorry.

0:21:16 - (Paul): So do you think that, for example, with the social media platforms, are the platforms the ones who are liable? Or is it the people putting the content on the platforms?

0:21:27 - (Jason): Yeah. Is the platform the car?

0:21:30 - (Tim): You would have to. The platform should be in a position to understand who is buying their own ads. And if you are in that position, if you're happy to take their money, I think you'd have a responsibility to work out who they are.

0:21:48 - (Paul): Yes. So then how does that translate into the AI space? Because people will equate the two and say, well, AI is just another.

0:21:56 - (Tim): Well, it's the same thing. They are identical, in my view. I mean, I'm not the regulator, I'm not a lawmaker, I just look at it and go, well, if you can control it for food, you control it for automobiles, you can control it for pharmaceuticals, why can't you control it for media? That's a question that I think people should be asking of regulators and governments.

0:22:23 - (Paul): Yes. So it is really down to the governments and to the standards bodies to start stepping forward and saying, right, we're on this, we are now going to provide some guidance on what we think the standards should be. There'll be a debate about that, but we've got to agree what they are, otherwise it's just going to be, you know, the Wild west again.

0:22:43 - (Tim): Yep, I agree.

0:22:45 - (Jason): Just going back to the first question about the future organisation and how to decide who decides, and using that example of the security guard coming in to wrestle somebody off the plane just so they can leave, you know, that's a great example of someone doing their job. Basically that security guard was told they have no context, perhaps, but they've been told that they need to remove somebody from the plane, so they're just doing their job. Nothing wrong with that.

0:23:18 - (Jason): But there's obviously lots wrong with that. And so that's an example of humans kind of getting it wrong. And now if we're introducing AI agents, for example, into similar kinds of business processes, hopefully there's not robots coming in to do the same thing. But, you know, there's businesses or people or even AI telling someone or telling something to do something. And so how does an organization go about making those decisions, to determine what an AI decides to do?

0:24:06 - (Tim): Yeah, I think you need just some simple north stars. And I referred to this in the book; it actually comes from one of the co-founders of Quantium. He talks about the data Hippocratic oath. You know, I'm not a doctor and I'm not an academic, I don't pretend to be, but my understanding of the Hippocratic oath is really simple. Point one is, first, do no harm. And point two is keep the patient's condition sacrosanct.

0:24:42 - (Tim): So if you come back to AI and you apply your data Hippocratic oath and say, I will not do anything that harms anybody in my use of these tools, and whatever data I have from a customer, I will keep protected for the benefit of that customer. If you do that, you don't end up with an algorithm asking a security guard to drag a customer off a plane. That won't happen. So I think as human beings, we tend to find these things so complicated.

0:25:16 - (Tim): Just apply some simple things. Bring a data Hippocratic oath in: don't do any harm, keep the data protected. You won't have a use case that involves someone dragging a person off and breaking their tooth as you do it, which is actually what happened. So I think these things are both complex and simple. And if you just come back to some very simple things and find your North Star that aligns with the customer's use, you'll be fine here.

0:25:49 - (Jason): So, you know, I think the Hippocratic oath is a great example, a really simple, principle-based one that can be applied in most contexts: just doing the right thing. And so, I guess, to apply that, if an organization is designing their systems and processes, there's a fair bit of work in thinking about or considering potentially bad things and trying to design against them. So would you say that that is a big part of it, of actually trying to identify risks or potential things that could go wrong?

0:26:28 - (Tim): And I mean, the risks often, I mean, the hard bit of all this is data curation. So it's like, the more you understand about data, the more you realize it is actually a data curation problem first. And if you have the data platforms, like in that Apple example, that system, it doesn't matter where you buy their product, you show up with your Apple ID, they know it's you. They can see I've got the watch, they know you've got the iCloud, they know everything.

0:27:04 - (Tim): So these organizations are born in the data stream. And it's hard, but it's relatively easy for them. A big Australian organization that was born outside of being a data native now has to go back and say, okay, how do we sort our data out? That is hard. Can't sugarcoat it. Takes investment, takes thought, takes time. And I think that's where a lot of the landmines are, in terms of: if you collect it badly, you don't have permission to collect it.

0:27:36 - (Tim): If you pierce privacy veils along the way, you'll make a mistake. There's all sorts of things to get right there and be careful of. And that's why I often come back to, I often say to people: work out whether you're going to build, buy or partner. Because most organizations might have an in-house legal team, but they'll also get external advice.

0:28:05 - (Paul): Sure.

0:28:06 - (Tim): And you might find you need advice here to get this right. Because if you feed bad data, uncurated data, into some engine, that's when you end up with the example we talked about of dragging the person off the plane.

0:28:22 - (Paul): So you're talking there about the way in which Apple, in fact for some time, have been able to do this, probably before AI became mainstream. They were able to get customer information at the point of contact with the customer to do the kind of things you're describing. But of all the major technology companies, Apple has seemed to be the late arrival with AI, just literally three weeks ago announcing Apple Intelligence. See what they did there. Very smart.

0:28:58 - (Paul): What are your thoughts in terms of why Apple have been late and what might they do next?

0:29:08 - (Tim): Well, I don't know. I suppose it depends whether you say they're late or not; that's for others to judge. I think they're in a pretty good place. The wealthiest billions of people on the planet own their devices. I've already earmarked buying the new one because it's going to have the AI upgrade.

0:29:33 - (Paul): Is there something slightly cynical about that? Because if people want to use AI with Apple products, they don't really have any choice. They have to buy the next generation device in order to get access to that technology. Now, I'm not saying they're the only offender that does that because I think Microsoft did it for years and years as well.

0:29:53 - (Paul): I mean, it is a little bit of a slightly bitter taste that, oh, I've got to get another iPhone or another iPad.

0:30:01 - (Tim): No, that's probably fair. But look, it remains to be seen what happens, what they end up doing with it. But, you know, I think they're a pretty smart bunch. You know, they entered the watch market from ground zero and took it, basically. They entered the earphone market and took that. I wouldn't underestimate their capacity to do something pretty groundbreaking here. And I've heard podcasters in America speculate that perhaps you will end up seeing a new version of Siri, which at the moment is terrible, but a version in the future where you can say to it, look through my photos from last year, find me a destination that looks like that, put an itinerary together, and it will go and do that.

0:31:05 - (Tim): And at which point, what does that mean for the incumbent apps of airlines, apps of travel companies? Apple, in the handset, is deciding for the customer, co-deciding with the customer, where they're going to have their holiday.

0:31:26 - (Paul): Right. This is killing apps in app stores.

0:31:30 - (Tim): Potentially. You know, I want to go to the airport. It can see the plane, it can see my flight's booked. It's got my diary. It might decide how I get there.

0:31:41 - (Paul): Right.

0:31:43 - (Tim): I think one of the things people underestimate about AI is, and I'm quoting Satya Nadella now, the CEO of Microsoft, he said their version of AI is a doing thing. It does things. So it's not like you just ask and get a recipe. It'll do that, of course. But you'll say, I want the recipe, can you go and do the shopping?

0:32:13 - (Paul): Right. Right.

0:32:14 - (Tim): And these are doing engines. Yeah. And we're at the start. This is the dumbest it will ever be. This is the slowest rate of change any of us will ever have.

0:32:26 - (Paul): Right.

0:32:27 - (Tim): And human beings are bad at predicting change, me included.

0:32:31 - (Paul): Yes.

0:32:32 - (Tim): So I don't think we want to underestimate something that a company like Apple might be able to do here.

0:32:41 - (Paul): And presumably it's not just Apple, but I mean, any of the major software vendors who are in that space. I mean, it could be Google, it could be Microsoft, it could be anyone. Yes.

0:32:51 - (Tim): Could be someone we don't know yet.

0:32:52 - (Paul): Just to make sure that we're not showing any bias. Great. I've got a question for you, Tim, which is more specific to the sector that I'm in at the moment, which is HR and L&D. So you made the point in the book that HR and L&D are a bit late to the game in terms of being able to exploit AI. What do you see as being the opportunities there? How can AI help in hiring, training, developing people?

0:33:25 - (Tim): Yeah, sure. Look, I'm not an HR expert by any stretch of the imagination, but I do know that if you think of the sophistication in, say, a Netflix personalization engine, it will digest billions of hours of viewing around the planet, make a prediction of what you most want to watch and optimize that for you. It will look at thousands of factors. It'll look at, you know, was it a romantic comedy? Was it a documentary?

0:33:58 - (Tim): Who are the starring actors? What were the main, what was the music soundtrack? Yada, yada, yada, yada, yada. Thousands of factors. And it'll make a prediction of what you want. And, you know, up to 75% of all viewing content on Netflix is off that prediction engine.

0:34:15 - (Paul): Wow.

0:34:16 - (Tim): That prediction engine is also in their back end, advising them on what to commission. So if, you know, viewing of new documentaries about AI has gone up 160%, they'll commission one. So it's an engine that helps. What does that look like for HR at the moment? You know, they're pretty blunt factors. Where did the person go to uni? How'd their ATAR go? How was their performance review? You might have five, ten, 15 factors. You don't have tens of thousands.

0:34:57 - (Paul): And a whole bunch of bias.

0:34:59 - (Tim): And a whole bunch of bias. Yeah. "He's a good guy, I like him a lot." There's all that stuff in the HR world. And I think, you know, back to your question, Jason: ten years from now, HR will look more like some sort of predictive score of likelihood of success, and personalization of how to help them.

0:35:22 - (Paul): Yes.

0:35:23 - (Tim): How to train them, you know, what their coach should look like, or not physically look like, but what they should say. But I think we're just at the beginning of those things.

0:35:31 - (Paul): Yeah. And we know there was a bit of research I saw from Josh Bersin who was saying that 40% of the skills that we have in white-collar workers today will be obsolete in two years' time. So if we're not using that as a baseline for determining that we need to really be investing in training, otherwise, you know, we've got a whole bunch of dumb people working for us. Absolutely.

0:35:59 - (Jason): It kind of begs the question of what the education system looks like as well. What are the actual jobs of the future that we need to be training people for now? And so, I guess going back to business, though, what are the kind of skills and capabilities that will be required of employees in the future?

0:36:23 - (Tim): Look, you know, I don't feel particularly well qualified to answer that, other than to say: if you are good at understanding what customers want, if you're good at building systems that allow you to understand what customers want, and you are good at making capital investments around what customers want, you are going to be employable forever. You know, I do think there are plenty of sectors of society where there will be churn events. I mean, it's like my horse example of 120 years ago.

0:37:04 - (Tim): You know, parts of the economy don't continue to thrive. That's okay, you know, as long as there are opportunities for those people to get into. It can actually be good. Yeah.

0:37:17 - (Jason): It's just change.

0:37:18 - (Tim): And it's just change.

0:37:19 - (Paul): Yeah. And then being able to kind of map out what kind of a career path people have got in order to remain relevant in the workplace. And that's, again, one of the things that I think AI could really help with: understanding what are the skill sets that people need to start developing now which are going to make them more attractive to employers in 5, 10, 15 years' time.

0:37:42 - (Tim): Yeah, there are so many things around AI that are completely fascinating. But, you know, one guy I read was saying how, basically, how dumb it is. You know, take opening doorknobs. Just in this building there's probably 25 different doorknobs. A human being will walk up, instantly work it out and open the door. There isn't a robot yet that can do that. Yeah, so basically it's this idea that the super complex the machine will nail.

0:38:20 - (Tim): One of the examples I use in the book is finding new antibiotics. Feed it 100 billion molecules, it boils the ocean, and with pattern analysis it will quickly find a molecule that is an antibiotic. But ask these robots to open the door of the average suburban house and they can't. And in a way, that's just the world we're in. That's why it's worth thinking about how these things come together, and which part of it is the human going to do, and continue to do, really, really well.

0:38:51 - (Tim): So I don't conclude the book being pessimistic about this at all. I think employment's highly likely to remain really strong through the AI revolution, but there'll be sectors that get belted. That's going to happen, but there'll be plenty of new sectors, too.

0:39:07 - (Jason): And just, so what about, I mean, in the future, where business is extremely data-driven and automated, what place is there for intuition, especially at a leadership level?

0:39:32 - (Tim): Amazing opportunities there. If you have better information, you can make better decisions, and the data won't tell you every innovation. You'll see signal in the customer behavior, which you can then interpret to be something that.

The future of AI and leadership in business.
Harnessing customer insights through curiosity and value exchange.
The need for AI regulation and ethical considerations.
Applying a data Hippocratic Oath to AI and data management.
Apple's late entry into AI and potential market disruption.
AI's potential in HR and L&D for future workforce.