ReThink Productivity Podcast

AI: Insights from Ed Hogg

ReThink Productivity Season 1 Episode 152

Get ready to increase your understanding of artificial intelligence with insights from Ed Hogg, CEO of Solved by AI. From his beginnings as a Formula One engineer to founding Solved by AI, you'll learn about the fundamental distinctions between AI and machine learning, and how high-quality data is the linchpin in achieving accurate AI results. Plus, Ed discusses the incredible complexity behind advanced AI systems, including groundbreaking examples like the development of the COVID vaccine.

We also dissect the leading AI chat models currently shaping the industry. Discover what sets ChatGPT from OpenAI, Gemini from Google, and Claude from Anthropic apart, as Ed highlights their unique strengths and the ongoing advancements that promise even greater capabilities. Whether you're looking to leverage AI in your business or simply curious about the future of technology, this episode offers a compelling, accessible exploration of the AI landscape.

#theproductivityexperts
Register for the 2025 Productivity Forum
Find us in the Top 50 Productivity Podcasts
Connect to Simon on LinkedIn
Follow ReThink on LinkedIn


Speaker 1:

Welcome to the Productivity Podcast. Today I'm delighted to be joined by Ed Hogg, CEO of Solved by AI. Hi Ed. Hi Simon, thank you very much for having me. No, you're more than welcome. So a topic that is, or should be, at the forefront of anybody in any business's mind is AI, and we're going to get into the detail and hopefully keep it simple for those that are entering this realm. But before we do that, let's find out a bit about you. Do you want to give us a kind of Ed's career biog?

Speaker 2:

Yep. So I started life as a Formula One engineer working for Mercedes, and then went to design race cars for Bentley up in Crewe at their beautiful factory up there. While I was there I was responsible for working on the steering racks on the car, as well as a large number of other components, and we introduced some very simplistic AI into how the steering rack works, and that is where my love for AI started. Once I'd finished designing that particular car, I left that business and went to work in workforce management, and discovered that in workforce management there is an awful lot of data about rotas, about sales, about demand, and I was able to build a number of AIs off the back of that. I have since grown Solved by AI into a company that solves problems for people using artificial intelligence, not just in the workforce management space but in a wider space entirely, in kind of retail, manufacturing and hospitality.

Speaker 1:

Excellent. The time of recording is just after the British Grand Prix, so you'll have been pleased, or not pleased, with how yesterday went in your old world?

Speaker 2:

Very happy, as a Mercedes fan. My family, my wife, still work for Mercedes. So yeah, very happy, very happy that Lewis finally won another race after 945 days. Not that we were counting in this household.

Speaker 1:

Good. So let's start and, as I said, we'll try and keep this simple for people that are venturing into this world or have seen bits online; it's all over LinkedIn and Twitter if you follow the right people. So just talk to me about the difference between AI and ML, machine learning.

Speaker 2:

Yeah, so AI is a really broad field. It's about creating systems that can perform tasks typically requiring human intelligence, like problem solving and decision making, and they are often comparative things: is this person who they say they are? What information do I require? Machine learning is a subset of AI, and that's where you train an algorithm on historic data, or data that exists within a database, in order to enable systems to learn and make decisions or predictions without explicit programming. So you let the system learn and grow, and there are a couple of different types within that. Two are common. We've got supervised learning, where you say here is the answer, yes or no; there is a finite set of possible answers and we're trying to mark the correct one or not. Or we have unsupervised learning, where we let the system look at what the input and the output were, and let it make decisions and grow on its own.
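For readers who want to see the distinction Ed describes in code, here is a minimal sketch using Python and scikit-learn (our choice of library and made-up data, not anything referenced in the episode): the supervised model is given labelled right answers to learn from, while the unsupervised one only gets the inputs and groups them itself.

```python
# Minimal illustration of supervised vs unsupervised learning (scikit-learn).
# The tiny data set here is invented purely for demonstration.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: every example comes with the "right answer" (1 = yes, 0 = no).
X = [[1, 20], [2, 22], [8, 90], [9, 95]]   # e.g. [hours worked, sales]
y = [0, 0, 1, 1]                           # known labels to learn from
clf = LogisticRegression().fit(X, y)
print(clf.predict([[7, 80]]))              # predicts a label for unseen data

# Unsupervised: no labels at all; the algorithm finds groups on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                          # clusters it discovered by itself
```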

Speaker 1:

And that's what most people start to think of AI as: it's learning over time and getting better and better. So the more data you feed it, the more ability it has to learn, and the more times it does it, the more accurate it gets. Is that the general theory?

Speaker 2:

Yes, it is. You want to give it good quality data, so the more good quality right answers you give it, or the more well-labelled wrong answers you give it, the better it gets. We obviously live in a world where, especially in things like Twitter, there is a lot of misinformation out there, and so it's really important, when we're training our AIs, when we're using machine learning, that we give our system really high quality data that is marked correct or not correct, in order to get the outputs that we want and not just noise or fake results, fake news.

Speaker 1:

Yeah, and is it as simple as treating it like Google, typing in a Google-style search of, you know, find me the best X, Y and Z? Or should we just stick to Google, if they were kind of working in a more binary way?

Speaker 2:

Yeah, so with ChatGPT, Bard, Gemini, those kinds of ones, you can act like that. You can use it to search, to find the right answer, to guide it in the right direction. With some of the other applications, you do need to use data science to evaluate the accuracy of the model and to work out what the best solution or the best answer is.
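As a rough illustration of the "data science to evaluate accuracy" step Ed mentions, one common approach (our example, with synthetic data, not something described in the episode) is to hold back part of the data, then measure how often the model's predictions match the known answers on that unseen slice.

```python
# Hold-out evaluation: train on one slice of the data, test on another,
# and report accuracy on the part the model never saw during training.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```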

Speaker 2:

We have examples such as the COVID vaccine, the Oxford-AstraZeneca vaccine, which was developed with AI by a group of people from the University of Oxford and a spin-out from that. That was an accuracy problem where they had to look specifically into the data, go into the metadata and try to understand it, and you need multiple PhDs to be able to understand it. That's the most complex type of AI we have. A lot of the AIs that we work with are really trying to make things much simpler for people to access. That's why ChatGPT has taken off so much: it's the first AI where you can just treat it like Google and get the answer, and it's learning to do multiple different functions, so it's kind of a multimodal system.

Speaker 1:

And you mentioned some of the names there. The landscape seems to be growing almost every day with new things, but we've got maybe the common ones people have heard of: ChatGPT; Bard, or Gemini, which is the Google offering; there's Sora, there's Anthropic, there's Kling AI, or King AI, I've seen today. There's all sorts of ones that seem to pop up, but there's a plethora of systems that you can start to use, some free, some limited free, some paid for. So that's a challenge for people like myself who are kind of starting to look at this and value its importance. But where do you jump in?

Speaker 2:

Yeah, so the three biggest out there are Claude, which is from Anthropic, Gemini, which is from Google, and ChatGPT, which is from OpenAI. Those are the most used and they're also the broadest trained, so they've been trained on the most data and go the widest. Claude, which is the Anthropic answer, is trained to give a much narrower answer set than the other two. It's meant to be as ethical and as sensible as possible, so you can use it because it's the most ethical. It's also designed intentionally to run in more ethical ways: it uses better server processing technology than OpenAI, and it uses a range of other tools that are available.

Speaker 2:

So you can think of Claude as being the most ethical one. Then ChatGPT is the best at giving you direct answers. If you want to have a conversation with something in order to get an answer, that's the best one. You can think of ChatGPT as the know-it-all in the office: you can turn around and ask it to do things, what's going on and what it's up to, really quickly, and it's really good for a daily user. But it's not as accurate as Gemini.

Speaker 2:

So Gemini is definitely the most accurate. It's also integrated with the most tools, so you can get really high quality Google Docs output, Google Sheets and that kind of stuff. You can also get really good integrations between Gemini and Microsoft products, although ChatGPT is definitely starting to move over there too. But the Gemini product suite is pretty useful and it's already in what you do: if you have the Google app on your phone, you can search with it, and it's really useful on that side. So Gemini is kind of the most accurate. It does come with some caveats, in that it is the most data intensive: it's something like 50 times more energy expensive to use Gemini to answer a question rather than just using Google on its own.

Speaker 1:

So, lots of choice and, having played with probably all of the ones you've talked about, I've experienced some interesting results, let's say, some probably accurate results, and some points where it still looks and feels like it's in its infancy. So it either doesn't work, or there's a misconnection, or it can't connect to what it's connecting to, I assume because so many people at this point are playing with it. So is it fair to say this is on a journey and there's momentum, but we're going to see that snowball effect where it gets quicker and faster and better and more accurate in a much shorter space of time?

Speaker 2:

Yeah. So if you think about the background of how they work, they're basically training off the internet. They're training off just heaps of data that they've scraped off the internet, whether that be every song lyric that's ever been written, all of Wikipedia, lots of Google search data, all of the Google Analytics data that comes from Gemini's users. All of this data that it's trained off amounts to trillions of records, so it's an absolutely massive data set, and they then have to filter that for what is useful. What they have been really good at, in all three of those areas, is the initial filtering: they've managed to remove anything that was written in gobbledygook or Wingdings, for those that used to use the random Word fonts. But now they have to start being more accurate, so they have to get it to a new level.
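To make the "initial filtering" idea concrete, here is a toy sketch of the kind of cheap checks used to throw away gobbledygook before training. The rules, thresholds and sample strings are entirely ours for illustration; real pipelines are far more sophisticated.

```python
# Toy pre-filter for scraped text: drop records that are too short,
# mostly non-alphabetic, or wildly repetitive. Thresholds are arbitrary.
def looks_like_real_text(text: str) -> bool:
    if len(text) < 20:
        return False
    alpha_ratio = sum(c.isalpha() or c.isspace() for c in text) / len(text)
    if alpha_ratio < 0.8:
        return False                # mostly symbols or encoding junk
    words = text.lower().split()
    if not words or len(set(words)) / len(words) < 0.3:
        return False                # the same words repeated over and over
    return True

scraped = [
    "Hello world, this is a sensible sentence about rotas and sales.",
    "@@@@####$$$$",
    "spam spam spam spam spam spam spam spam",
]
print([t for t in scraped if looks_like_real_text(t)])
```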

Speaker 2:

At the moment it's about the level of a graduate from university. Now we're talking about trying to get it to be a PhD student in every single subject possible, and that's what most people are expecting ChatGPT-5 to be. We're at ChatGPT-4o right now, and for ChatGPT-5, which is due in about 14 months' time, the target is for it to be able to complete a PhD in every single subject possible. That's the kind of intelligence and the kind of trajectory you're on. You're effectively, within three years, through a seven-year degree with these things now, and they've still got four years' worth of degree to go, but it's going to happen in 14 months.

Speaker 1:

Yeah, and that must be a benefit in terms of, like I say, pulling all that information from the internet. But back to the point of feeding it good data: there's, as Donald Trump would say, lots of false news on the internet as well, so I suppose there's a danger there, and it has to start to be more contextual in understanding what's true and what's false, which must be quite difficult to train.

Speaker 2:

Yeah. The underlying reason that it's going to take so long for ChatGPT-5 to come out is because they have to solve that problem. Some things you can get experts to come in and give an opinion on. Medical data, for instance: if you ask ChatGPT some medical questions, it's pretty high quality on that, because it trains only off high quality journals rather than just the kind of stories it's got of "this is how I fix things".

Speaker 2:

But there are a lot of chat boards and other things out there that it needs to remove the wrong answers from, while it also wants to learn from the right answers.

Speaker 2:

So going through and reviewing upvotes and likes on things, and those kinds of signals, is a noisy data set. So yeah, it's the biggest problem, and it's actually part of the new EU AI Act that, from the next ChatGPT launch, and the same for Claude and Gemini when the new versions of those come out, they have to make public what data they're using to train on. That will be really interesting, because there's also a lot of data that companies feel they have rights over, such as music back catalogs, et cetera, that some companies don't want included in the ChatGPT model. So trying to increase the accuracy of the model while taking into account all of these rules is a challenge that they're overcoming, but it's a challenge that all data scientists are trying to overcome: the kind of protection around data.

Speaker 1:

Yeah, and it's kind of back to that when you see the list of the predicted jobs this will impact. It's all those what I'd call binary, or more binary, ones. So medicine, law: things where there's not a definite answer, but certainly in law you've got all your reference studies, the historic cases and the test cases to make judgment calls on. In medicine, clearly, you've got symptoms that you can narrow down into potential issues, and you might get closer; you might end up with a cluster rather than it being very wide. Through to more opinion or grey-based facts, which I suppose, like you say, depends on where you're pulling the data from, whether it's blue, red, pink or purple. And yeah, in terms of the new matters coming out of that whole area, I think it was Scarlett Johansson, wasn't it, where they asked to use her voice a couple of times and didn't get it, and then the voice they used was allegedly not hers, but very similar.

Speaker 2:

Yeah, it's scary in terms of how they were able to get a voice actor to be half close to her and then use an AI to combine it with Scarlett Johansson's voice. That's an interesting lawsuit to follow as to what happens on that front. But in terms of the kind of jobs it's going to take, it's definitely not going after laborers. An AI is not going to be able to lay a new driveway for anybody any time soon; it's not going to be doing that job. It's going after office jobs.

Speaker 2:

That's the space that it's in at the moment. It is not clever enough to do those office jobs yet, though. It is about enhancing the people that do those office jobs, allowing them to make the decisions where their knowledge, the local area knowledge, is really useful, and removing the need for them to write needless emails by just getting ChatGPT to write the bulk of the text from a few notes they put in. That's the kind of automation and speed of process that you can expect right now.

Speaker 1:

Yeah, and I kind of see it as: rather than it taking jobs, if you can free up people to focus on the intellectual stuff, the stuff that makes a difference, and then in the background you've got these bots running that are doing the, you know, connecting to your supplier X to check that your invoice has been paid, or to raise an invoice, or to do the stuff that can be automated, you can use your people in a different way. I don't necessarily see it as something that decimates the workforce and everybody's laying driveways. I think if you're smart, you can harness the power of both to get the best out of both.

Speaker 2:

Yeah, I think one of the expectations is that customer service is going to improve with this, because managers, especially in a retail and hospitality setting, are going to have the time to interact with customers, to speak to them and handle problems in a much better way than at the moment, where they have to fill out 40 pages' worth of documentation in an evening and 39 of them are purely for head office use. Managers are going to be able to manage and actually interact and build relationships and do those kinds of things. So we could quite conceivably see a significant increase in the level of customer service that people get off the back of this.

Speaker 1:

Good, that's good to hear. Any other thoughts in terms of where this is heading in the future?

Speaker 2:

So one of the big problems we have is power. I appreciate this is probably not the podcast for whether nuclear power stations are useful or not, but there is going to be a power problem as ChatGPT, Bard and Gemini grow. There are already laws in California restricting how much power they can take from the grid. This is being deployed globally, and if we stay on the current trajectory it's going to get to a point in the future where we need more power than we're able to produce for this, plus electric cars, and so there's going to be some legislation that needs to come out to handle that. So what you will see is things like Gemini Mini, which deploys locally to your PC and runs off your PC only, which doesn't take as much power and runs as if you're running an Excel spreadsheet. That is the expectation: locally trained or downloadable neural networks, with work going into reducing the amount of power they need.
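As an illustration of the "downloadable, runs on your own machine" direction Ed describes, here is a sketch of running a small open model locally with the Hugging Face transformers library. The library and model name (distilgpt2) are our illustrative choices, not anything named in the episode; any small local model would make the same point.

```python
# Run a small language model entirely on the local machine: after the weights
# are downloaded once, generation happens on your own CPU, no cloud calls.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small, CPU-friendly
result = generator(
    "The biggest constraint on large AI models is",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```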

Speaker 2:

If they can solve that problem, then the future for AI, especially within the kind of operational and HR space, and especially in retail and hospitality, is in optimization. It's all going to be about optimization, about reducing unnecessary losses within the business by solving the simple, low-hanging fruit. You're going to have a highly optimized supply chain, a highly optimized workforce, and people are going to be able to do their jobs well and focus on the things that really add value to your business rather than the peripheral red tape.

Speaker 1:

Well, maybe we should ask AI to solve the power problem. Surely it can come up with some of the ideas.

Speaker 2:

Yes, although some of the answers it's going to give will have been scraped off Twitter, so you're going to get a range of different opinions there.

Speaker 1:

Happy days. So, this has hopefully helped people understand a little bit about what its potential is and how you can navigate it. If people want to get in touch with you and explore the conversation further, on just a theoretical level, or get involved in some of the stuff you're doing at Solved by AI, where's the best place for them to get in touch?

Speaker 2:

Yeah, so drop us an email at info@solvedby.ai and we'll get in contact there. We're also on all social media, LinkedIn in particular, so hit us up there. And we have our own podcast series, which is fresh, called the AI Leadership Podcast Series, which you're going to be a guest on very soon, Simon.

Speaker 1:

I am indeed.

Speaker 2:

And we love to have chats about AI and carry on the discussion there.

Speaker 1:

Brilliant, and we'll link all your profiles through in the show notes so people can click to get in touch, just to make life easier. So, Ed, I always enjoy chatting. Your brain's the size of a house, so I know it's a challenge to take me on the journey with you, because I don't really understand it. But thanks for simplifying, thanks for being really clear and concise in terms of where we're at and where we're heading, and we will catch up soon. Perfect. Thank you so much, Simon.
