ReThink Productivity Podcast

AI Value Track Podcast - Episode 1

ReThink Productivity Season 14 Episode 1

We strip away the jargon and show how to use AI as an assist that lifts productivity, trust, and measurable outcomes. From data readiness to culture, governance to first use cases, we focus on steps that turn hype into results.

  • Clear definition of AI and human‑in‑the‑loop
  • Where AI adds value across knowledge work
  • Culture over tech as the main blocker
  • Data readiness and training data basics
  • Governance to reduce shadow AI risks
  • Leaders modelling tools and setting objectives
  • Selecting first three use cases aligned to KPIs
  • Measuring adherence to AI recommendations
  • Balancing efficiency vs productivity for trust and ROI
  • Practical rollout: owners, feedback loops, training



#theproductivityexperts
Register for the 2026 Productivity Forum
Find us in the Top 50 Productivity Podcasts
Connect to Simon on LinkedIn
Follow ReThink on LinkedIn


SPEAKER_00:

Welcome to the AI Value Track Podcast. We're going to be demystifying AI at the request of our customers. We're bringing together the right people, and I am in charge of dumbing it down. No jargon. We're going to explain everything. We're going to keep it simple. So this is podcast one in a series of three, and I'm delighted to be joined by Ian Hogg, CEO at ShopWorks, and Ed Hogg, CEO at Solve By AI. Hi Ian. Hi, Simon. Edward, nice to meet you.

SPEAKER_02:

Hi Simon. Thank you very much for having us.

SPEAKER_00:

So we're going to keep it simple for people. That's my key responsibility. So if we start talking jargon, I'll pull you up and get you to explain. This is about demystifying and keeping it real. So in terms of why we're doing the podcast, what are listeners going to get out of it, and what's the hope?

SPEAKER_01:

Well, we want to share some of the experiences we've had helping customers implement AI solutions, really. What I find with the customers we speak to is that they're nervous to start because of a lack of awareness or knowledge. So hopefully we can share some of the things we've learned, some of the mistakes we've made and pitfalls we've come across, but also some good tips on how to do it right.

SPEAKER_00:

Perfect. And it's your day-to-day, Ed. What are you looking to help people with in terms of this series?

SPEAKER_02:

Yeah, I think Ian covered it quite well there. I think one of the key things is to get across the steps you need to take before you start an AI project, what you should be doing during it, and then how you make sure you're taking the most advantage of it and really driving that ROI. Perfect.

SPEAKER_00:

So I'm going to start really, really simple. AI, artificial intelligence. Ed, do you want to give us a little synopsis of what that actually means? It's everywhere, people are talking about it. What is AI?

SPEAKER_02:

So AI is the concept of a machine learning to be intelligent about a concept, so therefore it is artificial intelligence. Essentially, it means a computer has encoded in itself, in some code, how to solve a problem, and when you go and ask it to solve that problem, it assists with that. It used to be very narrow, one or two seconds' worth of tasks, and we're now up to about an hour's worth of human work that can be done better by AI than by a human. Perfect.

SPEAKER_00:

So let's take that into the business place, Ian. Where are the areas where businesses are most open to AI use?

SPEAKER_01:

Well, AI is a tool. So expanding on what Ed was saying, these are just computer tools that sort of run themselves and make decisions themselves. So anywhere there's knowledge work is open to productivity improvements from AI. With the recent ChatGPT launch, which is the sort of leading software model. One of the tools, yeah. Yeah, probably the leading, most well-known tool, they said that coding, software development, was one of their key objectives, to be able to revolutionise that. But people are using it in legal for contracts, in finance, for documentation, for making podcasts. It's really anywhere a knowledge worker could deliver.

SPEAKER_00:

So cast our minds forward ten years, and we're all going to be redundant; there'll be no work for us to do because the machines are running it all and they're all speaking to each other. It's a scenario that people have in their minds. I personally don't think that's true. Is that something you see? Is it an enabler or a replacer?

SPEAKER_01:

I think ten years is a long time to forecast. Five minutes in this world seems a long time. If we're doing an implementation project, I encourage people to have something like a three-year vision. It's moved so fast in the last two years that, over ten years, yes, it is possible. But right now, and in the next year or two, it's an enabler, an enhancer. We talk about it as an assist. You hear the phrase co-pilot. So for our scheduling tool, we call it a scheduling assist. It doesn't do everything; it assists you and makes you more productive. Hopefully it takes away the real drudgery of work and lets people focus on the more value-added tasks. So for the next year or two, that's definitely where we're at. Could it get better than that? Probably nobody knows. Nobody knows.

SPEAKER_00:

Probably is the answer. And I suppose, to summarise what you've said there, it's helping people make decisions; it's not making the decision. Is there going to come a point where it will make the decision, do you think, in certain scenarios?

SPEAKER_01:

I think they call it human in the loop, which means the AI does something and presents some information to a human who makes the final decision. And AIs are fallible, just like we all are. There's a phrase called hallucinations; you have to check the output. So how long before you might have another AI checking the output of the first AI? People are already starting to experiment with that. But it remains fallible, so you have to have humans check the output and feed back into it.

SPEAKER_00:

So anybody watching or listening to this who sits around a boardroom, AI will be on their agenda, whether it's a question mark, whether they're doing something with it, or whether they think they should be doing something with it and aren't. Everybody's talking about it; you can't escape it. It's on the telly, it's on your phone, it's everywhere, right? So in your world, Ed, where do most organisations get stuck? Because there are two things at play. There's tech, and we've talked about tools like ChatGPT, Gemini, Grok, all these tools you can type something into and ask to do something: a pretty picture, code for an app, whatever. So is it the tech that people get stuck on, or is it the change? Now I've done something with it, what does that drive? It's definitely change.

SPEAKER_02:

It's a culture thing. If you look at one of the most recent surveys, which came out only last weekend, 98% of people want to be involved in the decision at the place they work when it comes to implementing AI. So there is an organisational want to be involved, but we aren't deploying it in the right way, and that's where projects are getting stuck. You've also got, from a cultural point of view, data readiness. According to some organisations like Hubble, only about 8.6 to 10% of businesses out there are ready for AI in terms of their data set. So being data prepared, spending a lot of time making sure your data is in a usable format, somewhere it can be easily accessed, where you can train on it in a secure way, those are the things people are getting stuck on.

SPEAKER_00:

Training data then. So again, for the uneducated like me, what does training data mean?

SPEAKER_02:

Yeah, so AI learns based on historical patterns, so you need to have your data ready to put into an AI. AI thinks in ones and zeros, like computers do; it has just learned the patterns in those ones and zeros very well. A lot of people don't have their data in a place or a format where it's well structured; they haven't labelled things correctly in their data sets. So you've got everything labelled as pints rather than identifying the type of pint, if you want to do that kind of stuff, zero alcohol versus alcohol, those kinds of things. Those data set training problems really cause issues. So there's a culture of having your data prepared, but it's also about allowing managers to be in the right place to make those kinds of changes.
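As a rough illustration of Ed's labelling point, here is a tiny sketch (the product names and fields are invented for the example, not from the episode): a sales log where everything is just "pint" gives a model nothing to learn from, while a properly labelled one does.

```python
# Hypothetical sales logs. In the first, every row is labelled only
# "pint"; in the second, the type of pint is recorded as well.
poorly_labelled = [
    {"item": "pint", "qty": 2},
    {"item": "pint", "qty": 1},
    {"item": "pint", "qty": 3},
]

well_labelled = [
    {"item": "pint", "category": "lager", "qty": 2},
    {"item": "pint", "category": "zero_alcohol", "qty": 1},
    {"item": "pint", "category": "stout", "qty": 3},
]

def categories(rows):
    """Distinct categories a model could learn patterns from."""
    return {row.get("category", "unknown") for row in rows}

print(categories(poorly_labelled))  # {'unknown'} - no signal to train on
print(categories(well_labelled))    # three distinct categories
```

The point is only that the extra label turns an undifferentiated log into training data; the cleanup effort Ed describes is exactly this kind of relabelling.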

SPEAKER_01:

So sorry, Ian. Yeah, I was going to build on that one, Simon, because I think it's the biggest change management project of all time. Every business, whether they like it or not, is going to be using AI.

SPEAKER_00:

And and some probably are without knowing. Yes.

SPEAKER_01:

It is the reality. Yeah, your staff are using it at home, even if you haven't given it to them officially. Yeah. And what we find, so we've done quite a lot of implementations of forecasting and workforce management scheduling, is that when they fail, they fail for cultural reasons: not poor management, but a lack of setting expectations with the team and bringing the team along. So I think it's definitely a culture challenge more than a tech one. We haven't had any failures because of tech. We've had failures, or delayed projects that had to restart and go around again, for cultural reasons. It comes down to things as sensitive as what you're calling the project. Like we were talking about earlier, using the word assist: auto-schedule sort of implies the AI is going to do everything, and schedule assist implies it's going to help you. Those cultural differences matter. One says you as the store manager aren't involved, and the other says you are involved. So yeah, it's cultural, definitely.

SPEAKER_00:

So senior leaders then, how do they bring their teams with them? In an organisation you'll typically have the senior team that buys something and decides to go that way. That then trickles down through the hierarchy to store or restaurant level, whatever the organisation is. So what things should they be thinking about? You've mentioned a few there, but are there any other points they should be thinking about?

SPEAKER_01:

There have been quite a few studies showing that if senior management aren't living and breathing it and aren't involved, leading from the front and setting an example, then it's likely to fail. But probably the key is setting objectives at the start: making sure the whole team are aware of those objectives, aware of their role in it, and aware of why you're doing it. Because there was another survey, I think the same survey Ed was referring to, where more than a third of employees have zero trust in the company's AI plans, and two-thirds just don't trust the company at all.

SPEAKER_00:

Do they need to know it, I suppose, before they trust it?

SPEAKER_01:

If you leave a vacuum, if you're not sharing with your employees what you're trying to achieve and why, then that vacuum might get filled with negative responses, suspicion or lack of trust. So you need to fill it very early, and it's a key part of many projects.

SPEAKER_00:

And most colleagues in an organisation will have Gemini, Grok, ChatGPT, all three on their phone. So again, you could be in a situation where your colleagues are more armed with AI tools than the business itself.

SPEAKER_01:

Yeah. Well, there was a study done where 90% of the people were using AI at home or personally, and only 40% were using tools supplied by the company. So that's another thing. In fact, we were guilty of this ourselves until we bought the licenses. Until you buy the licenses, train people and give them some boundaries on what data they can put into an AI, they're likely to be doing it from home, on their own accounts, and putting your data, the company's data, into a model where they don't know where that data is going to end up.

SPEAKER_00:

That leads me to the next question, really: where does that data go? Is it the same in all the different models? Is it different by model? Am I giving my data to train something else? I don't know.

SPEAKER_02:

Yeah, it depends. There have been some pretty public leaks very recently for some of the large language models. You just have to do your research on it. There are enterprise editions that allow you to restrict where that data goes, where you can take a copy of the model, for want of a better word, and run it on your data alone without it seeing the rest of the internet. But at the same time, you aren't getting the full effect, because the AIs are learning all the time, so you want to be able to expose it. So do your research as to which one to use; there are more and more enterprise editions with high-quality security on them. The other thing, just to add to the point you were discussing: measuring adherence to the AI. The AIs make some really good recommendations and decisions, but until you, as a leader, measure how well your team are adhering to what the AI recommends, you're not going to see that true ROI. So measuring AI adherence, whatever the subject or decision, is a useful metric for a leader to ensure they bring their team along.
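Ed's adherence metric can be made concrete with a simple sketch: the share of AI recommendations the team actually followed. This is illustrative only (the task names and field shapes are invented, not from any particular product):

```python
def adherence_rate(recommendations, decisions):
    """Fraction of AI recommendations the team actually followed.

    `recommendations` maps a task id to the option the AI recommended;
    `decisions` maps a task id to the option the human actually chose.
    Tasks with no recorded decision count as not followed.
    """
    if not recommendations:
        return 0.0
    followed = sum(
        1 for task, rec in recommendations.items()
        if decisions.get(task) == rec
    )
    return followed / len(recommendations)

# Hypothetical week of schedule suggestions vs what managers picked.
recs = {"shift_01": "schedule_A", "shift_02": "schedule_B", "shift_03": "schedule_A"}
actual = {"shift_01": "schedule_A", "shift_02": "schedule_C", "shift_03": "schedule_A"}
print(adherence_rate(recs, actual))  # 2 of 3 followed -> 0.666...
```

Tracking this number over time is one way a leader could see whether the team is actually being brought along, per Ed's point.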

SPEAKER_00:

Yeah, so that leads us nicely on to my next point. The internet's full of cats dancing on the moon on a skateboard with a bloke behind on a hoverboard knitting; I'm making it up, but something like that will be there somewhere. And that's all neat and fun and nice. But it doesn't really work in business. So how do we pick the first three use cases from a business point of view, and then define what success looks like?

SPEAKER_02:

I think you have to align with KPIs first. If you're a retail business, say, and you're focusing on increasing sales, you can't go and deploy an AI that reduces your staff hours or is focused on reducing your staff allocation in a location, because they aren't aligned. And that will seep through; you won't have success in the projects because you haven't aligned with your cultural goals. That's the first thing. The second is that you really want to focus on what you have the data for. So coming back to: do I have the data, is it in a format where it can be trained on now, how much effort do I need to get there? And the third is more about your ideal workforce group. What are you looking for your workforce to do? What are the bits of their jobs that they enjoy, and what are the bits that are just mundane, where they aren't learning anything and don't want to do them anymore, that you can help replace? Because that really does lead to quite a high adoption rate. Keep them narrow, make sure they're very focused projects, and make sure your first three use cases are focused, are going to have a good return, you've got the data for them, and you're able to align them to your KPIs.

SPEAKER_01:

Just building on that, I think what Ed's referring to is that there are some big projects, say using AI to optimise stock and reduce the amount of stock in a warehouse. Those sorts of projects are big, they're easy to set measurements against, and they're often the projects that get a good ROI. Yeah. At the other end of the spectrum, there's also a bit of experimentation required. For instance, as we're trying to get adoption in our own company, we're giving people ChatGPT; we don't yet know all the use cases, they'll come up with them. They're more creative and innovative than I am. So if you give people a 20-dollar-a-month ChatGPT license and say go play with it, and these are the rules on what you can and can't do, they're going to come up with examples, and some of them might be great and some actually don't save any time at all. For those individual tasks and little projects that people do with those tools, you're not going to set KPIs in advance; you're just going to let them go and experiment. And for those, I think it's best to have a shared "look what I've done, that's cool, I've saved half an hour a day". So there are two ends of the spectrum: the big set-piece projects Ed was referring to, where management are trying to get one percent off some big metric, and then sharing the tools around and sharing the best use cases, and hopefully something positive will come out of it.

SPEAKER_00:

So kind of play and learn, yeah, and just experiment between the teams. And there's a phrase, shadow AI, which again you'll have to explain to me, that we should be avoiding, apparently.

SPEAKER_01:

Yeah, that's what I was hinting at earlier when I was saying that if you don't supply the tools, people will use them themselves. And funnily enough, a simple project, and this is how we did it internally at ShopWorks, is to do a survey: what tools are you using yourself? Is the company paying for it? Because these tools are not massively expensive, they're £20 here and there, and if you've got a software engineer on £50, £60, £70,000, then a £20 addition to make them a few percent more productive is not a huge investment. What you find is that you get a proliferation of these tools, but quite often the shadow AI bit is that people are implementing them themselves. They've got their own license or they're using the free version. And it's not the most complex project to run: you do a survey, find out what people are using, find out what they need, do a little risk assessment on the tools, then pick your official tools, buy them for people, and train them. And I think the key bit, the final bit, is training. You hear stories of people signing up for a load of ChatGPT licenses and nobody using them, because nobody's been trained or encouraged to use them. So to summarise, the shadow element is people using their own tools outside of the control or guidelines of the company. But potentially using company data in it. Yeah. And there are risks to that, considerable risks.

SPEAKER_00:

And in terms of before rollout, Ed, are there typical stages, a checklist, that people should think about before they start adopting, building on Ian's point?

SPEAKER_02:

Yeah, not to be boring, but data again. Data is one of the most important. Aside from that, you need to have an owner, somebody who's owning the project. It's classic project management, really. You need to have an owner, you need to have clear communications internally as to what the success criteria are for rollout and what's going to be happening within it, and build trust within the organisation. And then one of the quick checklist items I have is to make sure we have a planned feedback point. These AIs can learn at such a rapid rate that you have to take advantage of that. If you're implementing in a factory, go and ask the people on the shop floor: if they're using a tool, is it successful? If you're working on a cruise ship, go and ask the passengers whether they like whatever you're doing. You need to ask these people as soon as you possibly can to get the learning in, so it can learn quicker and you get better adoption. Those are the things I would focus on.

SPEAKER_00:

Any tips, Ian, or Ed again jointly, for avoiding failure on top of those?

SPEAKER_01:

So I think select the right project. I know that sounds obvious, but you almost want to go for the low-hanging fruit. If I was coming into a company and somebody said, right, we want to start adopting AI, then, like I said, there are those two ends of the spectrum. If we were on the bespoke end, a big set-piece project, I'd be looking to get what they call a halo effect project: a big win. So pick the project that's most likely to deliver that, get everybody behind it, give it high priority, and the organisation will learn from that project, but also be inspired by it. Because if people are running around saying I've saved two percent of my stock costs or increased sales by one and a half percent, then funnily enough, other people get enthusiastic about it.

SPEAKER_02:

Yeah, I think for me, realise that it's assistive. It's an assistance tool; it is not replacing a workforce. Use it to assist your workforce, don't use it to replace your workforce. Enhance, not replace, I think, is the phrase. And as long as you do that, you're going to bring along your staff, you're going to drive an ROI within your business, and you're going to introduce people and use that halo effect, as Ian was talking about, to really increase the adoption of AI within your organisation.

SPEAKER_01:

Well, one of the phrases that gets talked about a lot in this space is whether you're using AI for efficiency or productivity. If you're using it for efficiency, there are managers going, right, we can put the AI in and we can cut staff costs. And it's no wonder staff don't trust the motives: the machines are coming. The other one says, actually, fine, if we take the same team we've got now and give them all the right tools, how much more can we do? Can we sell more? Can we build more product? Can we deliver more podcasts? Whatever your output is, if you make your whole team 20% more productive, could you get 20% more growth at relatively the same cost? Or do you want to take 20% of the cost out by putting AI in? As you'd expect, there are managers that take both approaches. That is a sort of fundamental debate that goes on, and different companies will take a different approach.
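Ian's efficiency-versus-productivity fork can be put as a back-of-the-envelope comparison. The figures below are purely illustrative, not from the episode: the same 20% lever either cuts cost at flat output or grows output at flat cost.

```python
def efficiency_route(staff_cost, output, cut=0.20):
    """Efficiency framing: same output, staff cost reduced by `cut`."""
    return staff_cost * (1 - cut), output

def productivity_route(staff_cost, output, gain=0.20):
    """Productivity framing: same staff cost, output grown by `gain`."""
    return staff_cost, output * (1 + gain)

# Hypothetical annual staff cost and sales output.
cost, out = 1_000_000, 5_000_000
print(efficiency_route(cost, out))    # (800000.0, 5000000)
print(productivity_route(cost, out))  # (1000000, 6000000.0)
```

Which route is "worth more" depends on margins and the trading environment, which is exactly the debate Ian describes; the hybrid Ian mentions later (headcount freeze plus growth) sits between the two.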

SPEAKER_00:

Yeah, and some might want a hybrid, a mix of both. But current trading environments are tough. Whatever you're in, retail, hospitality, manufacturing, consultancy, everybody's cost base is going up. So saving your way out of it always feels like a relatively blunt tool long term; short term it might work, but actually it's that freeing up of time to drive service and better conversations. If you're in a physical environment, drive average transaction value with your customers, because getting new ones is tricky, so selling more to the same is probably the sweet spot you'd want to see it used in, I guess.

SPEAKER_01:

Or give them better service so you don't lose them. Retention is a good thing. Absolutely. The CEO of Shopify wrote a quite famous memo to his staff, and I say famous, famous amongst AI geeks. He said, before you create any new hires, consider whether that job could be done by an AI. And one of the things that's probably happening is that it's already having an impact on hiring, particularly at the more junior levels; there's plenty of data out there to support that. So I think where people are getting the cost saving is that they're not necessarily letting existing people go, they're just not adding: they're using it to put headcount freezes in whilst generating growth. Yeah. So that would be a hybrid version of it. But the objective is still to generate the growth. And of course, it ought to be more profitable growth if you've managed not to add too much more staffing to generate it. Perfect.

SPEAKER_00:

Ed, Ian, thank you very much for joining us on episode one. I think my key takeaway is you've got to be thinking about it, you've got to be starting, but the underlying data is paramount. So make sure you've got the data in the right format; even if you're not on the journey at the moment, you've got to be preparing that data, because at some point you will be. But it sounds like there's a wealth of opportunity and productivity and efficiency gains to be had. Some are there, some are maybe a bit behind, some are starting to get ahead. But thanks for your insights, and we'll speak to you on future episodes. Thank you, Simon.
