ReThink Productivity Podcast

AI Value Track Podcast - Episode 3

ReThink Productivity Season 14 Episode 3


We weigh the real trade‑offs of building AI in‑house versus buying and configuring proven tools, and map a practical route from pilot to production without blowing the budget. Clear steps on data, governance, ethics, and IP help you create value you can measure.

  • Build vs buy decisions tied to strategy and IP 
  • When generic problems justify off‑the‑shelf tools 
  • Niche bottlenecks and owning differentiated capability 
  • Real costs of talent, data architecture and compute 
  • Governance, scope control and reliability expectations 
  • Data quality, sourcing and security by design 
  • Measurable pilots, baselines and explainability 
  • EBITDA impact, inference costs and ROI discipline 
  • Ethics beyond bias, oversight and customer impact 
  • Partner contracts, IP protection and reuse limits 
  • Scaling blockers across finance, compliance, HR and IT 
  • Regulations to watch including EU AI Act and GDPR

#theproductivityexperts
Register for the 2026 Productivity Forum
Find us in the Top 50 Productivity Podcasts
Connect to Simon on LinkedIn
Follow ReThink on LinkedIn


SPEAKER_02:

Welcome to episode three of the AI Value Track. I'm Simon Hedaux, and I am joined by a returning guest, Ed Hogg, who's the CEO of Solve by AI. Hi, Ed. Hi, Simon. And a returning guest, James Boll, head of data and insights at ReThink Productivity. Hi Simon, thanks. Beat me to the hi. There we go. We're keen to get going on this one. So today we're going to talk about build versus buy and how it relates to AI. James, in keeping with my mission across these three series to keep it simple for people and demystify, do you want to start us off?

SPEAKER_01:

Build versus buy, very simply, is: do you want to build this product from scratch yourself, or are you going to buy it off the shelf? And obviously there's a bit of a spectrum there, because you can buy something and configure it in the middle. But that's the decision you're going to have to make as a business with your AI tool. If you're looking at solving a generic problem that lots of businesses have solved before, or lots of other businesses need to solve, generally you're going to be thinking about buying that in off the shelf. You need something to triage your customer service requests, or you need to sort through your documents; that problem's been solved before, so you're probably going to want to buy that off the shelf. If you have a specific industry challenge related to your business, or something that's genuinely business specific, you might want to do some configuration of that tool. But generally, if you're solving a problem lots of other people need to solve, you're probably better off first trying to find something off the shelf that's already proven. However, if you're dealing with something that's really specific to your business, and in particular something that's a strategic bottleneck, it's preventing you from growing and you don't think anybody else is going to be in that situation at all, then you're probably going to lean towards building it yourself, not only because it's not been built and proven before, but also because the IP involved in building it is really important to your business and you don't necessarily want that shared around other people. And that's broadly how you'd make that decision.

SPEAKER_02:

So IP, again, for those pulling everybody up on acronyms: intellectual property. That's what I would say it stood for. Yeah, I'm glad you've said that, because I might have got it wrong. Internet protocol, is that an IP thing anyway? So in this instance, intellectual property: the thoughts and ideas within that business that you want to keep secret, monetise, sell to somebody else, whatever it might be. Okay. So the alarm bells in my head go off when we start to talk about bespoke, because I'm thinking that sounds expensive, that sounds tricky, that sounds like I'm going to need a load of really intelligent people who are going to cost me a lot of money, and it sounds difficult to maintain. So my head goes to: there must be some really key decisions you've got to make to go down the bespoke route. I'm sure you've both got an opinion. We'll start with you, James, and then come to Ed.

SPEAKER_01:

Yeah, you've just hit the nail on the head, really, with the challenge that leaders face when they're thinking about AI. If you're going to build yourself, you need to make sure you have the correct level of expertise within the business to do it and to do it well, which might mean hiring lots of PhDs. It might mean investing really heavily in your data architecture, and it might mean investing really heavily in your IT infrastructure, because you might need some really hardcore computing power to do what you want to do. So it is a lot of upfront cost and it is a long-term investment. Whereas buying something and trying to configure it, you can probably get to the solution quicker. However, as an organisation you may want to be building your expertise in this area because you believe it's part of the future and it's going to be part of your strategic advantage, so you want a team that's comfortable with it from the off. There's a balance to be struck in that decision. Ed, I don't know if you'd add to that.

SPEAKER_00:

Yeah, I think for me you're looking at solving more niche problems when you're building yourself. There are some great tools out there that you can buy in. There's no point in us trying to compete with ChatGPT on a large language model, or going after our own artificial general intelligence models, which we're not at yet and which would cost trillions to get close to. But there are some very niche, very focused problems within our organisations that we might be able to deploy against, and there are tools out there that help us do it. What we're looking for is that sweet spot of: is this niche enough that it only really applies to us, or to us and a very small number of people around us, and therefore is it a better trade-off, a better bottom-line effect, for us to build it in-house rather than try to buy it from outside?

SPEAKER_02:

Or, if I was spinning it in a positive way, there's a problem that you could solve in a bespoke way and then monetise and sell onwards. Oh, yeah. Yeah. So there might be a different angle of: we see this as a wide-scale problem, we want to solve it in our bespoke way and then monetise it moving forward. Because I suppose the other way to look at it is that if it's bespoke, it adds value to that organisation because it's theirs, back to the IP debate, however you do it. Okay, yeah, got it. Makes sense. So I'll underline this, and it's directed at you, Ed, without getting technical. When organisations think, okay, we understand the problem, it's really niche to us, or we think it's got bigger legs and we want to keep it for ourselves for the time being before we sell it: from an architecture point of view, is that a big round of decisions, meetings, things that need to be considered? What types of things will people need to think about when they're saying, yeah, tick, we're going bespoke, tick, we know the niche problem we're trying to solve? You've got some tools and things, but what does it sit on? Where does it live? What does it look like?

SPEAKER_00:

I think the first thing you need to do is have an oversight committee, which is a bit of business jargon, but you need a group of people who are going to oversee how the model's performing at all stages: from a training point of view, from a performance point of view when deployed, and from an ethics point of view. We quite often come across those problems. You need a group of people who are able to step in and say no, because if you train it in the wrong direction or train it with the wrong data, you could get some effects you definitely don't want. Then you want a strong leader for the project, and they need to look at your architecture and work out a way that you can technically embed it with your data.

SPEAKER_02:

Because it's got to live somewhere, right? It's got to live somewhere. It's got to live in a database or whatever, on a server, somewhere in the cloud. Excuse my technical ignorance here, but you've got to be able to access it in a safe and secure way; that's my minimum expectation.

SPEAKER_00:

And some of these models need a lot of computational power to run. You could need to use AWS's, Amazon Web Services', sorry, you got that in before me, biggest server, or Google Cloud's biggest server, in order to run for a very small amount of time. So you need to put it somewhere that you're able to move the data into and out of, and that meets the data security threshold of your information security officer, or whoever in your organisation is responsible for data. The other thing you need to think about is the skill set that you have in-house. One of the architecture things you need is an architect. And coming back to what James said about PhDs, you need somebody who is clever enough and able to manipulate these tools in the right way. Because of that demand they are getting more expensive, but these people are out there, and you can find them and get them to come and work for you on a very focused project if it's interesting enough. And most niche projects are; most data scientists love niche projects because they're pushing the boundaries of what's been done before.

SPEAKER_02:

Okay, so I'll come to you in a second, James, but just let me simplify those steps. You need to understand where it lives. And I think there's a point to make on the Amazon Web Services and Google Cloud piece: you need to try to understand the cost, because I assume you could start to run these things, be using massive computing power somewhere, and then you get the bill through and go, oh my god, I didn't realise it cost me ten grand for those two minutes of computing. I'm making the numbers up, but in relative terms there's a cost when you're pulling that resource. So we know where it lives, we've then got our governance in place, a typical project working group with the right people and the right skill sets looking over all the bits, and then we've got really clever people who are going to start to work with the data to help solve the bespoke problem we're building this for. Yeah. Yeah, that summarises it really nicely. Perfect.
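
To make that compute-cost point concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption rather than a real cloud price, and the structure is just one way you might frame the estimate.

```python
# Back-of-the-envelope training-cost estimate. All figures below are
# illustrative assumptions, not real cloud prices; check your provider's
# current rate card before budgeting.

GPU_HOURLY_RATE_GBP = 25.0    # assumed cost of one large GPU instance per hour
GPUS_NEEDED = 8               # assumed number of instances for the training job
TRAINING_HOURS = 72           # assumed wall-clock time for one full training run
RETRAINING_RUNS_PER_YEAR = 4  # assumed refresh cadence

def annual_training_cost(rate: float, gpus: int, hours: float, runs: int) -> float:
    """Cost of all training runs in a year, before storage and egress fees."""
    return rate * gpus * hours * runs

if __name__ == "__main__":
    cost = annual_training_cost(GPU_HOURLY_RATE_GBP, GPUS_NEEDED,
                                TRAINING_HOURS, RETRAINING_RUNS_PER_YEAR)
    print(f"Estimated annual training compute: £{cost:,.0f}")
```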

SPEAKER_01:

Does that fit with your understanding, James? Have you got any other points to add? Yeah, the only thing I would add is that most organisations, I don't think, have much of that in place when they start. It's the same with hiring the talent: just start looking at it now, because you will need it at some point. And a couple of tips: most of your legacy data will need some kind of curation and cleansing before you can use it, and the cure for bad data is not more bad data. That could be years' worth of data. Oh, absolutely.
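
As a rough illustration of the kind of curation and cleansing pass being described, here is a short Python sketch using pandas. The file name and column names are hypothetical placeholders, not anything from a real system.

```python
# A minimal data-cleansing pass of the kind described above, using pandas.
# The file name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("legacy_transactions.csv")

# Drop exact duplicate rows that crept in over years of exports.
df = df.drop_duplicates()

# Standardise inconsistent category labels to one canonical form.
df["category"] = df["category"].str.strip().str.lower()

# Flag rows with missing critical fields rather than silently imputing them.
missing = df["transaction_value"].isna()
print(f"{missing.sum()} rows missing a transaction value out of {len(df)}")

df[~missing].to_csv("transactions_clean.csv", index=False)
```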

SPEAKER_02:

That could be, you know, if we think WFM, workforce management for those in the know, you kind of need two years of data as a minimum, and people struggle with that. Even something as basic as till transaction data has holes in it, gaps. So some of these bigger projects might need three, four, five, six, even ten years, and we've had a pandemic in there and everything, but big chunks of data at a potentially very granular level.

SPEAKER_01:

Yeah, and it can be inconsistent: it could have several different ways of categorising the same thing, it might be labelled incorrectly, it might be duplicated. You really need to be on top of that, and you need to have change control mechanisms as well to ensure that you're managing it effectively. The other thing to think about is where the data is coming from. Customer interactions with your business naturally generate data, but you might also be able to get data from vendors and suppliers, or sourced from third parties. How do you integrate all that information together? How do you connect it? And how do you keep the bits that need to be private, private? You need to consider all those things as well. It feels big, but the best step is to just start doing it.
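
For the integration point, here is a hedged sketch of joining internal and third-party data while keeping personal fields out of the combined set. The table and column names are invented for illustration.

```python
# A sketch of joining internal and third-party data while keeping private
# fields out of the combined set. File and column names are hypothetical.
import pandas as pd

internal = pd.read_csv("internal_sales.csv")       # includes customer_email
supplier = pd.read_csv("supplier_deliveries.csv")  # keyed on store_id + date

# Join on shared business keys rather than on anything personal.
combined = internal.merge(supplier, on=["store_id", "date"], how="left")

# Keep personal data out of the analytical copy entirely.
combined = combined.drop(columns=["customer_email"], errors="ignore")

combined.to_csv("sales_with_deliveries.csv", index=False)
```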

SPEAKER_02:

Yeah, so there are almost subsets: let's call AI the programme, and then there are lots of projects within the programme. There's a data project, a where-is-it-going-to-live project, a what-does-the-cost-mechanism-look-like project, hiring the talent, et cetera. So it's big stuff, but you can break it down into manageable chunks to get through. Okay. So we do all that; let's cast our minds forward. We've got something we can use that works, and as ever people want to do a pilot: does it work, doesn't it work, how quickly can we go? People tend to get stuck, I think not just in AI but generally, in pilots. Sometimes it's because there's no motivation, sometimes it's because things aren't clear. So, Ed, in your experience, how do you unblock that stoppage, or even better, stop it happening in the first place and set people up for success in those pilot situations?

SPEAKER_00:

Well, the first thing you have to do is measure where you are right now, before you even jump into a pilot. You have to understand what you're trying to improve; you have to get your baseline. Once you've got a baseline, the next thing is having a clear measurement of what success looks like and where you are tracking towards it, with constant feedback, making sure you're reviewing it. In project speak that's probably 30, 60, 90 days after it's deployed: clear gateways where you say, right, we're now going to look back and measure our KPIs. The other thing is to make sure that it is measurable. When you're building bespoke stuff and working with people, you need an overview of: can I actually measure what impact it's having? Can I measure what it's doing? Explainable AI is quite a problem, but it's getting easier and easier to do, and you need to take advantage of those tools.
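
A minimal sketch of what that baseline-versus-review tracking might look like in code; the KPI names and figures below are invented purely to illustrate the 30/60/90 gateway idea.

```python
# Baseline-versus-pilot KPI tracking sketch. Metric names and figures are
# invented for illustration.

baseline = {"avg_handle_time_mins": 12.5, "first_contact_resolution": 0.62}

reviews = {
    30: {"avg_handle_time_mins": 11.0, "first_contact_resolution": 0.66},
    60: {"avg_handle_time_mins": 10.2, "first_contact_resolution": 0.70},
    90: {"avg_handle_time_mins": 9.8,  "first_contact_resolution": 0.71},
}

for day, kpis in reviews.items():
    print(f"Day {day} review:")
    for name, value in kpis.items():
        change = (value - baseline[name]) / baseline[name] * 100
        print(f"  {name}: {value} ({change:+.1f}% vs baseline)")
```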

SPEAKER_02:

Explainable in terms of what, just so we're all talking about the same thing?

SPEAKER_00:

AIs are traditionally a black box. They're a group of neurons within that box, each one effectively a decision point, and you've got thousands of these decisions going on in there. It's very hard to see how someone's brain is working, and it's the same problem with AI. But there are tools out there which can tell you: this is why the AI made that decision, or this is why the AI did that.

SPEAKER_02:

So this is kind of justifying how we got to the answer, almost. Or people wanting to daisy-chain back to say, we put this in, we got this answer, but actually I want to understand how it got to the answer. Mathematically, you know, one plus one equals two: we can see the daisy chain, we put a one and a plus and a one in and we got two out. But it's clearly far more complicated than that.

SPEAKER_00:

Yeah, there's some really good maths, without getting too geeky and bringing game theory out for a second. There are some really clever tools out there that are able to say: the reason the AI thinks this picture of a dog is a dog is because 98% of the input was recognised as dog and 2% of it looks like cat. It's able to show you the breakdown of the model's reasoning and therefore why it output the right answer. And then I think a lot of the people this podcast is targeted at are thinking about a top-level number. They're thinking about EBITDA, they're thinking about profit, thinking about reporting up to an executive committee. It's worth reviewing, at points throughout the project, how your project is going to impact EBITDA. Deloitte says 68% of niche, bespoke projects have an EBITDA increase associated with them. It's really worth making sure that you are tracking that and that you are part of the 68 rather than part of the 32.
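
As a toy illustration of the kind of explainability tooling Ed describes, here is a short Python sketch using the SHAP library with a scikit-learn classifier. The dataset and model are stand-ins; the point is the per-feature attribution it produces.

```python
# A toy example of model explainability using SHAP with a scikit-learn
# classifier. The dataset and model are stand-ins for whatever you deploy.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# shap_values shows how much each input feature pushed this one prediction
# towards or away from each class: the "98% looked like dog" breakdown.
print(shap_values)
```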

SPEAKER_02:

Back to computing costs: that could absolutely kill your business case if the run cost is something you hadn't factored in, or it's more because the data set's bigger or the cost of computing goes up. There are all those things that can, I suppose, take you off kilter.

SPEAKER_00:

The inference costs, the cost to use the AI on a daily basis, tend to have the biggest impact on whether you're going to get your EBITDA win, so it's really worth monitoring that.
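
A rough inference-cost model of the kind worth keeping against the business case might look like the sketch below. The request volumes and token prices are illustrative assumptions, not quotes from any provider.

```python
# Rough daily and annual inference-cost estimate. Prices and volumes are
# illustrative assumptions only.

REQUESTS_PER_DAY = 20_000
TOKENS_PER_REQUEST = 1_500           # prompt plus response, assumed average
COST_PER_MILLION_TOKENS_GBP = 2.00   # assumed blended rate, not a real quote

daily_cost = REQUESTS_PER_DAY * TOKENS_PER_REQUEST / 1_000_000 * COST_PER_MILLION_TOKENS_GBP
annual_cost = daily_cost * 365

print(f"Daily inference cost:  £{daily_cost:,.2f}")
print(f"Annual inference cost: £{annual_cost:,.2f}")
```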

SPEAKER_01:

Yeah, and it's worth thinking about it before you've even started the pilot, because the biggest pilot killer is probably finance saying, you need how much now? If you've not got a broad sense of how much it's going to cost to deploy, maintain and operate up front, then it might be worth taking a step back and making sure you know that before you embark on the pilot. You talked in episode one about the challenges of rolling these things out being cultural. Often within a culture, the person with the biggest say-so, yes or no, is the finance director. And if you're suddenly going and asking for a couple of million pounds because you want to return three or four million, you're going to have a complicated conversation if you've not expressed that upfront and got buy-in already.

SPEAKER_02:

I suppose that leads us nicely into the transition from proof of concept to production, or rollout, depending on how you phrase it in your organisation. And you touched there, James, on some high-level governance control: understanding the full cost of the project almost before you embark, or at least an indicative view, so that people know what they're signing up to as you get there. And those numbers will change; that's life. Things go up in price, not particularly down, and assumptions are assumptions, so we can crystallise some of those and start to be more accurate. But are there any other things that you've seen, or that you'd ask people to think about, to mitigate some of those really difficult conversations at the point of: this looks good, we think it works, we think it's got legs; all right, we need a six-month break now before we can roll it out?

SPEAKER_01:

Yeah, well, I think Ed answered this question really well in his last answer. I would build on what he said around the ethics. I think you need to establish for your organisation what your ethical position is around AI: how much oversight do we need of the decision making Ed talked about, how are we going to check for failures and how are we going to react to them. And remember, when most people think about ethics in AI, they think about bias in decision making, which is really important. But there are also ethics involved when your AI isn't making the right predictions all the time, because that has an impact on your customers. What are the ethics of that? I think you need to have these discussions up front, have them early and in some detail, and establish how you feel about it in order not to have that pain later. And you definitely don't want that pain a year and a half after you've rolled it out. And then the obvious point around communicating continuously: you talked in episode one about people wanting to be involved in these decisions and to know what's going on, and you need to be doing that.

SPEAKER_00:

I think for me, with regard to governance, scope creep is the classic project problem. The silent killer. Yeah, and the same is true with AI. Define your scope, only give it the data that's relevant to that, and train it on that. If you then ask it to do different things, you quite often need to give it more data, whether that's weather data or moon cycles or whatever you now need to add, and that tends to mean a retraining or an additional training cycle, and you lose the EBITDA wins, your ROI. The other thing I'd focus on in governance is reliability. We have these SLAs externally with suppliers. SLA? Service level agreements, you're challenging me there, I had to think. Yeah, service level agreements, where you say 99.9% of the time this has to be up. If you're rolling it out to staff internally, even if it's only a small organisation, you need to make sure that you're always up, and therefore you need to have reliability as part of the governance of your project.
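
For a sense of what an uptime figure like that actually allows, here is a quick sketch converting an SLA percentage into permitted downtime; the percentages are just examples.

```python
# What an uptime SLA actually allows: converting an SLA percentage into
# permitted downtime per month and per year.

def allowed_downtime_hours(sla_percent: float, period_hours: float) -> float:
    return period_hours * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    monthly = allowed_downtime_hours(sla, 730)    # roughly the hours in a month
    yearly = allowed_downtime_hours(sla, 8_760)   # hours in a year
    print(f"{sla}% uptime allows {monthly:.1f} h/month, {yearly:.1f} h/year of downtime")
```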

SPEAKER_02:

And I struggle a little bit with the word pilot, because it kind of gives the connotation that it's optional at the end. I think for lots of organisations a pilot really is phase one of the rollout: it's test and learn in a couple of locations, whoever is going to feel the pain, and then we move through. Do you agree? Or is a pilot typically a stop point if it doesn't work? Do people stop?

SPEAKER_00:

It's hard with a lot of the build projects to do a pilot, because the cost is front-loaded; the initial cost is quite a lot up front. You can build something as a research project and it not work and you just cut your losses at the end of it, but that's not a pilot, it's a build and then a deploy stage. And I think one of the potential benefits of using somebody else is that they've been through the build phase, so you don't have to pay for that. Yes, perhaps the long-run costs are increased, but the upfront investment is so much less, and that's often a decision you need to make. And maybe if there is a tool out there that you can use for that, then you should go the other way.

SPEAKER_02:

So yes or no answer only allowed now. I'm the host, so I can set the rules. Does a pilot have to be perfect before it goes to production or rollout?

SPEAKER_00:

No, that's it.

SPEAKER_02:

We're done.

unknown:

No.

SPEAKER_02:

Thank you. I agree. Go on, I'll let you carry on. There's no such thing as perfect with these things. Exactly. So again, the point you made, Ed: keep the scope tight, because otherwise you get that creep of, we'll have an extended pilot, or we'll have a phase two. When you've achieved your 80%, the 80-20 rule, that might actually be good enough, because you can carry on to 90% for little extra return on your EBITDA and ROI numbers, and to 95% for marginal gains. That last 20%, I think, is where people sometimes fall into the gap of trying to get a bit more, when in relative terms they've already got most of the benefit.

SPEAKER_00:

With AI, like most projects, it's MoSCoW, the must, should, could, would kind of stuff. Defining what your must is, is effectively your scope here. Everything else, if it does it, great, but it will continue to learn and you can continue to improve on your own.
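
One lightweight way to pin that down is to write the MoSCoW scope somewhere machine-readable so the must list genuinely acts as the boundary. The requirements in this sketch are made up for illustration.

```python
# A lightweight way to record MoSCoW scope so the "must" list really does
# act as the project's boundary. The requirements themselves are invented.

scope = {
    "must":   ["Forecast store footfall by hour", "Explainability report per forecast"],
    "should": ["Weather feed as an input"],
    "could":  ["Local-events calendar as an input"],
    "wont":   ["Individual-employee performance scoring"],  # explicitly out of scope
}

def in_scope(requirement: str) -> bool:
    """Only 'must' items gate the pilot; everything else is a later decision."""
    return requirement in scope["must"]

print(in_scope("Weather feed as an input"))  # False: nice to have, not required
```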

SPEAKER_02:

You need strong leadership to enforce that, because otherwise everybody falls back to wanting everything. So, we talked about IP before, intellectual property. You've potentially got partners coming in to work with you, you've got new hires and teams. How do you go about protecting that? If you're employed, there are typically clauses in contracts saying the IP sits with the business, not the individual, et cetera. But there must be a risk, with this moving so quickly, that people's ideas all of a sudden appear in a different tool, a different bespoke platform, that are very similar?

SPEAKER_00:

Yeah, it's a difficult one. It's often case by case with IP: what intellectual property are you providing? What you often don't want to do is bring in a partner to train on your data and then have your competitor use your data to get ahead. You don't want your competitor to be succeeding because you outlaid the capital to build the model, so protecting that is really important with AI. Be very clear what is being used as a training data set, what's being used as a deployed data set, and what is available to the public; that needs to be clear in the contract. You also need to be very clear about what information, data, architecture, models and current training state are being brought into the relationship from third parties and what's being used here. There are some very public examples over the last couple of weeks, as we record this, of IP being exposed, or of people facing pretty high-level security risks from inadvertently publishing information to the web and having it taken advantage of by bad actors, so you really need to be cautious of that. With IP, you just have to make sure you spend the time before you start working out what is in scope and where your data is going to be used, and making sure that you maintain the rights to your core data so it isn't used to benefit your competitors.

SPEAKER_02:

Yeah, that makes complete sense. Clearly, take legal advice is the underlying disclaimer in that piece.

SPEAKER_01:

Don't use ChatGPT to check the contract. Well, maybe as a starting point. I would argue most businesses will be partnering with people to help them build bespoke stuff to begin with, because they won't have the organisational capabilities to do so. So you need to build that partner relationship from the off and know what terms you're going to be dealing on. And again, it's one of those conversations where, even if you're thinking two years down the line, start talking to partners now.

SPEAKER_02:

And a good partner should have that conversation in the early days, because they should be used to it, used to dealing with people, so they should almost have the answers packaged and ready to go within their model, right? So that's good. And then if we're going to build bespoke and we're starting to scale it, how do we stop the organisational blockers? How do we stop the organisation, when it becomes more well known, from slowing it down?

SPEAKER_01:

Yeah, you talked in episode one about cultural challenges being the big problem, and this being change management, which I think is really key. But London Business School published something talking about the four horsemen of the AI apocalypse: the four things that will stop your project from scaling effectively. One is finance. Yeah. You run out of money.

SPEAKER_02:

Run out of money, or it's costing more than we thought.

SPEAKER_01:

Yeah, or we can't evaluate this effectively. Compliance is another one, because it's very difficult to understand the risks inherent in an AI tool, and I don't know if Ed wants to come in on that in a second. HR, because they've not hired the right people to enable you to scale it, or it's hard to find them. And lastly IT, not getting the underlying infrastructure in place. Data needs to be managed collectively; you need standards and processes for data collection and data curation, and you need catalogues and ways to manage what's in there. Those four departments can all block something from scaling because they've not done that work, which goes back to the original conversation: when you start out on your pilot or your proof of concept or phase one rollout, whatever you call it, you need to be thinking down the line, okay, what's this going to look like, and engaging people from the start.
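
As a small illustration of the data-catalogue idea, here is a minimal sketch of what one entry might capture; the fields and values are invented.

```python
# A minimal sketch of a data-catalogue entry of the kind IT would maintain
# for scaling. Fields and values are illustrative, not a real standard.
from dataclasses import dataclass

@dataclass
class CatalogueEntry:
    dataset: str
    owner: str
    source_system: str
    refresh_cadence: str
    contains_personal_data: bool
    retention_months: int

till_transactions = CatalogueEntry(
    dataset="till_transactions",
    owner="Head of Data & Insights",
    source_system="EPOS export",
    refresh_cadence="daily",
    contains_personal_data=False,
    retention_months=60,
)

print(till_transactions)
```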

SPEAKER_02:

I don't know if you've got any builds on those, Ed.

SPEAKER_00:

Yeah, I agree with everything you just said; you summed up the compliance stuff pretty well. It's really important to engage your chief information security officer in this from the very beginning. There are some very easy ways to ruin it on compliance, particularly with regard to ethics and to exposing your data in an unsecure way. It's pretty easy to make some catastrophic errors if you don't follow some basic rules and basic steps around data security and AI security. So definitely spend some time with somebody who's trained in that area.

SPEAKER_02:

And ethics seems to be one of those things where there are laws or EU directives, and again it'll probably always be slightly different in the States. So that again is something to keep track of, because it seems to be a moving target all the time.

SPEAKER_00:

Yeah, the EU AI Act was passed very recently and is law now, and it's something you do need to be aware of if you're deploying any of your models inside the EU. It's not as prevalent in the United Kingdom, and definitely not in the US; there are different laws in different places. You need to be aware of GDPR, California's workers' laws and all those kinds of fun laws. But they all essentially mean you need to protect your employees' data, never give out more personal data than is needed, make sure you're training sensibly, and make sure that you're always within a scope and that there's a defined reason for using it. Those are the key things you need to take care of, and if you are doing those, the others should come along. You should be able to get HR and finance to buy in because you're following some very sensible principles. Compliance and governance don't have to be a hindrance; they can actually be a help sometimes. Perfect.
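
To ground the data-minimisation point, here is a hedged Python sketch of one basic step: pseudonymising an identifier and dropping fields there's no defined reason to keep. The column names are hypothetical, and this is an illustration, not compliance advice.

```python
# One basic data-minimisation step before training: pseudonymise identifiers
# and drop fields the model has no defined reason to see. Column names are
# hypothetical; this is a sketch, not legal or compliance advice.
import hashlib
import pandas as pd

df = pd.read_csv("shift_records.csv")

# Replace the raw employee identifier with a salted hash.
SALT = "rotate-and-store-this-secret-elsewhere"
df["employee_ref"] = df["employee_id"].apply(
    lambda eid: hashlib.sha256(f"{SALT}{eid}".encode()).hexdigest()[:16]
)

# Drop direct identifiers that the stated purpose does not require.
df = df.drop(columns=["employee_id", "employee_name", "home_postcode"], errors="ignore")

df.to_csv("shift_records_minimised.csv", index=False)
```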

SPEAKER_02:

Well, on that note, thank you, Ed, for your contribution again, and James. Key takeaways for me go back to the first part of this conversation, in podcast three, around being really clear on why you'd buy or why you'd build, and then that sets you off on a path with all the other things we talked about. Hopefully there are some really good insights there for the people watching and listening. If they want to find out more: James Boll and Ed Hogg, Edward Hogg, are on LinkedIn, as is Simon Hedaux; we're all on LinkedIn. I'm probably the least useful of us to reach out to for any technical or AI advice, Ed and James are far better than me, but we're all there. Feel free to contact us and connect. And thanks for watching and listening.
