Episode Summary
The episode "AI Across" delves into the evolution of artificial intelligence (AI) from expert systems to today’s large language models (LLMs), with a focus on its application in engineering and manufacturing. Hosted by Michael Finocchiaro, the conversation features Dr. Bob Engels from Capgemini Engineering, who shares his extensive experience in AI spanning over two decades. Capgemini is a global leader in management consulting, technology services, and digital transformation, with a strong emphasis on leveraging AI to drive innovation across various industries.
During the discussion, Dr. Engels highlights several key insights. He explains that while modern LLMs excel at handling large datasets and complex tasks such as natural language processing, they often sacrifice some of the transparency and explainability found in earlier rule-based expert systems. Additionally, he emphasizes the importance of edge AI (AI capabilities implemented directly on hardware with small footprints) to process and analyze data locally before it is sent to centralized servers. This approach can significantly reduce latency and bandwidth requirements, making it particularly relevant for real-time applications in manufacturing environments.
For PLM and engineering professionals, the key takeaway from this episode is the need to balance the advanced capabilities of AI with the necessity for explainability and correctness, especially when dealing with critical systems such as those found in aerospace or automotive industries. Dr. Engels suggests that while LLMs offer unprecedented opportunities, they should be complemented by more traditional deterministic approaches where precision and reliability are paramount. This hybrid approach will ensure that engineers can harness AI's power without compromising on the quality and safety of their products.
Full Transcript
Michael Finocchiaro
Hi, it's nice to meet you, Bob. Bob Engels of Capgemini Engineering. This is my inaugural webcast. I'm calling it AI Across.
Michael Finocchiaro
You just had a reorg. You just had a big reorg, didn't you?
Dr Bob Engels
Yeah, also that, but I work from group level, so I'm agnostic to all of that.

Michael Finocchiaro
Gotcha, I'll fix that in the edit. So I wanted to talk to Dr. Bob today. You've been around AI for quite a long time and you're one of Capgemini's spokespeople in this area. What's your view of intelligent systems, in terms of the evolution from expert systems and machine learning to today's LLMs?

Dr Bob Engels
Well, Michael, what I've experienced in my life: I started with AI in 1998, so I really went through a few rounds of this. I would say AI is going full circle, in a way. We started with expert systems, rule-based systems. Pretty deterministic, but they could also explain what they did. Then we moved to connectionist models, because we were a bit lazy: we wanted to learn from the examples directly. We lost some of the transparency and some of the explainability along the road, but the capability became better and better. In the end, deep learning networks could take a lot of tedious tasks out of our hands: data pre-processing, feature selection, all the stuff that we needed to do by hand. And now you see that deep learning went further, to LLMs and transformer algorithms with even larger datasets. We learned the trick for vision, we learned the tricks for sound, for speech. So we're very good at imitating these kinds of processes now. But we lost a lot of the correctness and the determinism in that sense. And what you see now is a lot of talk about hallucination, but it is basically error handling, probabilistic error, which people try to turn down by using crisp structures again, like knowledge graphs, like...
logically correct, crisp knowledge, just to make sure that these LLMs follow tracks that are actually qualified. And that, for me, is a bit like coming full circle.

Michael Finocchiaro
It's also interesting that rather than a really technical computer-geek language like Lisp, now the language is English, right? That's also a big transformation, I think. Especially with vibe coding, the programming language is just English, which is kind of crazy. I'm sorry, I forgot to ask: what's your role at Capgemini?

Dr Bob Engels
At Capgemini, I lead the global AI lab. That is cross-business-line, cross-region, cross-sector. So it is really about the application of AI and the technology of AI. What is the technology able to do? What do the businesses need? What kind of data requirements are there? What kind of infrastructure and governance do you need? And of course, last but not least, what are all these new, incredibly cool possibilities?

Michael Finocchiaro
Right, and all these new startups. If you look at the start-ups, I mean, the field couldn't be greener than it is now.
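The "crisp knowledge" idea described here, using a curated knowledge graph to keep a probabilistic model on qualified tracks, can be sketched in a few lines. Everything below (the triples, the claim format, the check) is an illustrative assumption, not any specific Capgemini implementation:

```python
# Sketch: validate LLM-extracted facts against a curated knowledge graph.
# The graph contents and the example "claims" are invented for illustration.

# A tiny knowledge graph as a set of (subject, predicate, object) triples.
KNOWLEDGE_GRAPH = {
    ("wing_spar", "made_of", "al_7075"),
    ("wing_spar", "part_of", "wing_assembly"),
    ("al_7075", "is_a", "aluminium_alloy"),
}

def validate_claims(claims):
    """Split claims from a probabilistic model into qualified and rejected."""
    qualified, rejected = [], []
    for triple in claims:
        (qualified if triple in KNOWLEDGE_GRAPH else rejected).append(triple)
    return qualified, rejected

# Pretend these triples were extracted from an LLM answer.
llm_claims = [
    ("wing_spar", "made_of", "al_7075"),   # supported by the graph
    ("wing_spar", "made_of", "titanium"),  # hallucinated, so rejected
]
ok, bad = validate_claims(llm_claims)
print(len(ok), len(bad))  # -> 1 1
```

A production system would extract triples from the model's answer automatically and drop or flag the unsupported ones before they reach the user; the point is only that the deterministic check sits outside the probabilistic model.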
Michael Finocchiaro
Oh, I know. It's crazy. The funding of nTop and Bright Machines and Sight Machine. It's crazy. So, since you're seeing across lots of industries, what's unique about applying AI to engineering and manufacturing compared to, say, finance or healthcare or CPG, you know, other industries that are less around discrete manufacturing?

Dr Bob Engels
Well, the interesting thing is that with manufacturing and financial services, there are some things that are actually very similar: the preciseness, the need for determinism. If you calculate your salary twice, you don't want to get two different salaries. And the same holds for an airplane: if you build two wings, you want them to be pretty compatible. So there is a lot that is very similar in these two areas. The real trick is now with these new capabilities that we got. There's multimodality, there's multilinguality, there's the LLM capability of knowledge interpretation and regurgitation. What can you do with that? Where does it fit and where does it not fit? Where should you go, where should you not go? That is the quest at the moment. That's what we're trying to find out, that's where all the research is aimed, and many businesses are really trying left, right, and center, up and down, everything, just to find the sweet spot where they can actually use this successfully. And that's an exciting quest.
Michael Finocchiaro
So speaking of data, how well do you think today's LLMs actually handle the kinds of data that I encounter in the engineering world? CAD and BOMs and requirements documents. Requirements is kind of an easy one, because that's just language, but what about structured files like CAD files or BOMs?

Dr Bob Engels
Well, we do have an interesting observation there, because when I joined Capgemini, I had an AI lab only for the Nordics at the start. And there we had some customers that wanted to do document management for big engineering constructions, complex constructions. There were a lot of blueprints and that kind of data: license data, product sheets, specification sheets. And they needed to be analyzed. So what did we use? This was 2019. We used GPT-2. And I'm not kidding, we used GPT-2. Why? Because the blueprints were these huge pictures, but they had these tables, you know, with numbers that related to the drawing. With a fine-tuned GPT-2, that actually worked. We had one very bright guy in our organization who learned how to fine-tune these models, and he could actually do that already, pre-ChatGPT.
Dr Bob Engels
So that was one thing. But your other question was really about what else you can do with CAD drawings and so on. The thing is that CAD files, but also Blender files, have a specification language behind them. And the same with architecture: if you do business process modeling, you use ArchiMate for your business process models, for example. You can actually also learn the language behind it and use that to regenerate pictures and images. We actually got this request from our engineering department: can we give it the first go? We need to build whatever new car. We describe the car just in a Word file, with maybe some drawings, and then the system makes an initial CAD/CAM drawing out of that. Or sorry, a 3D CAD drawing out of that. And that actually works, to a certain extent. I mean, it's not perfect, but it saves you days of work. From days you go to hours. And that's cool.

Michael Finocchiaro
Cool. Where do you see LLMs fitting in best today? Is it more like just doing search, summarizing stuff, code generation, optimization? What do you think is the most relevant application?

Dr Bob Engels
Well, there are a lot of single applications, but let us just have a look at which capabilities really are useful, and at the moment I would say there are two things that are really flabbergasting with this technology. The one is the multimodality: the ability to put pictures, sounds, text, speech into one thing. We have never had that before. You can take a PDF file with an engineering drawing, with a specification table. All in one.
Maybe even with a sound file integrated, and you can actually integrate the whole thing. That is really new, and we use it only a little. This is something so new that we don't really know yet how to use it.

Michael Finocchiaro
Okay, yeah, I can see that. In general, we could do more multimodal stuff, looking at the model and then connecting that.

Dr Bob Engels
Exactly, and connecting the dots. And the other thing that is really new is the multilinguality. I mean, you speak French with an American accent, I presume. I speak English with a Dutch, Norwegian, German accent, whatever. If I speak Norwegian, I have a Dutch accent and I use a German word. These systems are still able to interpret that. And I am so amazed, because you and I are old enough to remember Dragon NaturallySpeaking. I never managed to train it on my voice so that I could dictate a whole email with it. Now, with no effort, it just correctly interprets what I'm saying.
Michael Finocchiaro
I found it amazing. I use Otter.ai a lot. And even, like you're saying, when you have someone with a foreign accent speaking English, the only thing is acronyms: you have to train it on what these acronyms mean, so it picks up the right acronym. But other than that, it's absolutely incredible. I'm really amazed with that. So since you work with a lot of customers, what are some of the biggest data maturity gaps that you see today in industries that are trying to adopt GenAI? Because I think the biggest problem is the data and data maturity.

Dr Bob Engels
That's a very good point. Let's be honest: if you use a foundation model, everybody who is as good at prompting as you are can be your competitor. There is nothing that gives you a competitive edge. In the future, you will still depend on your own data. Your own data gives you your own space, a space where you can be better, more precise, more easily integrated in your sector. You can be better than the others. So you need to do something with your own data. So where's the maturity? I mean, the data ecosystem has been the issue for the last five to ten years. That's why we had the data meshes and all these things coming up. That's still just as relevant. I mean, be a good father or mother for your data. Make sure that your data is growing up with you. Because otherwise you will lose to your competitors.
Michael Finocchiaro
But in terms of the customers that you're engaging with through Capgemini, where are most of them? Do you see companies mostly at, say, level two or three and aiming at four? Or are they all at one and two and thinking that tomorrow they'll be at five?

Dr Bob Engels
It's all over the place, very much depending on the company itself. It's not even that one sector is further ahead than another. So it's company by company. Where you don't expect it, you have organizations that have really focused on their data ecosystem for the last five years and have pretty much done their job. I mean, you will never be ready with data; data changes. But now we get the whole realm of data from the edge, all the sensor data. We get more and more of that type of data, which we also have to integrate.
Dr Bob Engels
And what I often do with engineers, you know the Tier 1 to Tier 3 discussion in engineering, is add a Tier 4 to it, which is the foundational one, on the outside. So you get a completely new capability on the outside, this Tier 4, that is sometimes integrated by Tier 3 suppliers like SAP, Salesforce, and you name it. It brings a new capability, but also new risks, into your whole ecosystem.

Michael Finocchiaro
Absolutely. It makes me wonder, when you see the best examples of companies that are succeeding at that, do they have a hierarchical structure with a data line of reporting that's separate from IT? From my reading, it seems like one of the mistakes we've made is assuming that these data things are just an IT problem. And I don't think they are. They're a business line problem, but it can't be owned by the business line, because they're only going to look at their own interests. So you really need sort of an independent CDO.

Dr Bob Engels
What you ideally want, Michael, is really a data ecosystem and an information architecture that is independent of the type of use. Independent of the technology, definitely, but also independent of the business line. You need an information architecture that is generic enough to, on the one hand, resolve the data lifecycle of an object, and on the other hand, make sure that you can hook all that stuff into it.
And there are solutions to that. There are very interesting architecture patterns for this kind of architecture.

Michael Finocchiaro
But I'm thinking more of the people problem: how do you ensure that the exec committee understands that, and then make sure it doesn't become politics?

Dr Bob Engels
My experience is that if you really have a good story around this information layer, everybody understands data. Most companies are technology businesses now, and a technology business is driven by data. That's where we are in many cases. I mean, I don't really know many examples of technology businesses that are not driven by data. That is what you do with AI: you make decisions based on your data, you make analyses, analytics, based on your data.

Michael Finocchiaro
Absolutely.

Dr Bob Engels
So the interesting thing there is that on the board level and on the workfloor level, if you have an information architecture that explains how your data object runs through your organization and how it's governed, you can hook everything up to it, because you can talk about the production of it, you can talk about the publication of it. And you can build an architecture that gives the ones publishing it or serving it to your clients a lot of freedom in what types of technologies they use, without destroying your information architecture. That is, I think, one of the problems we see in many data ecosystems: the whole technology and solution architecture depends too much on the final use, which means that as soon as the users change, like they do now with all the new use cases this AI brings, you need to redo your data ecosystem. You don't want to do that. So you need to be more generic.
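One way to picture this use-independent information layer is a data object whose governed lifecycle (production, publication, consumption) is recorded separately from whatever tools produce or consume it. This is only a toy sketch; the stage names, the owner field, and the `DataObject` class are invented for illustration:

```python
# Sketch: a use-independent information layer. Every data object carries a
# governed lifecycle log that any consuming technology can hook into.
# Stage names and fields are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

STAGES = ("produced", "validated", "published", "consumed")

@dataclass
class DataObject:
    object_id: str
    owner: str                      # accountable business owner, not a system
    history: list = field(default_factory=list)

    def record(self, stage: str, actor: str) -> None:
        """Append a governed lifecycle event; reject unknown stages."""
        if stage not in STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.history.append((stage, actor))

    def published(self) -> bool:
        return any(stage == "published" for stage, _ in self.history)

bom = DataObject("bom-4711", owner="engineering")
bom.record("produced", "plm_system")
bom.record("published", "data_platform")
print(bom.published())  # -> True
```

The design point is the one made in the conversation: consumers can come and go (dashboards today, AI agents tomorrow) without the lifecycle model itself changing.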
That is also, I think, one of the reasons that this whole idea of knowledge graphs is coming up more and more.

Michael Finocchiaro
Yeah, I agree. And do you feel that it's easier to do this from a greenfield than from a brownfield? Or is it the same problem?

Dr Bob Engels
It depends. Well, if you start from a greenfield, you have the advantage that you can do whatever you want with your architecture. If you start from a brownfield, you at least have data. So there's a trade-off. The question is, if you have to start building your data landscape from scratch and collect the data from scratch, that's a big issue. And we are far enough along now that AI is also starting to help us with transforming these ecosystems.

Michael Finocchiaro
So that fits well. What should companies stop doing when they say they want AI in the product pipeline? What are they doing wrong when they say that?

Dr Bob Engels
When they want AI in the...
Michael Finocchiaro
When they just say, we're just going to do AI. What pitfalls should they avoid?

Dr Bob Engels
Yeah, this is the question I've actually had for 25 years now. My PhD started with that, with a German carmaker. Literally, the question in my PhD statement was: the board wants to do something with AI, explain to us what we could do. And that question is still very much there in many organizations. So, I mean, it's not a question of whether you should use AI or not. The question is really, what do you want to automate? And I think you should automate. And automation can be many things; it doesn't need to be digital. Sometimes a conveyor belt between workstations is already an optimization that does more than anything else. But once you decide to do something digital, then you have to decide: where do I need automation that is flexible or not flexible, deterministic or non-deterministic? You need to map what your needs are, and then you can find out what types of AI you can use and what kind of infrastructure you need to put in.

Michael Finocchiaro
Well, I was asking about pitfalls, the directions they shouldn't go. Like they shouldn't say, I don't know, I'm just going to give AI to all my employees and let them go at it.

Dr Bob Engels
Well, we do have those kinds of examples. We do have clients where the board actually says, okay, AI is fully available for everyone, because we want to learn. But then they also have to accept the risk that brings to the organization. And the risk is not only that you get the wrong answer; it's also data leakage. There is a lot of risk you can have with external models.
Michael Finocchiaro
Or you get overconfident and fire people like Klarna did, and then you have to hire them all back because nothing works. That went too fast, too soon. Or IBM, right, recently.

Dr Bob Engels
And Silicon Valley has the slogan, move fast and break things. That is good if you want to do something new, but they never care about cleaning up the mess. And I think if you're a CEO or on a board of directors and you're producing something, you cannot do that. You cannot just break everything and hope that you don't have to clean up any mess. I think there are very few boards that want to accept that kind of thing. So you need to be a bit aware of what you're doing. The airline one is an example, the Canadian airline: they made a chatbot, but they were not aware of the risk. They said, okay, we just put it live in public; then it cost money, and then we do it again. They didn't do a good risk assessment. Because if they had done that, then this would be okay; it would be part of the game. It can go like that.
Michael Finocchiaro
So we can't really treat it as one size fits all. We really have to adapt it case by case, where it fits the most.

Dr Bob Engels
I mean, you can build pretty generic infrastructures now. That's another very exciting game. You could theoretically build an infrastructure based on AI that is self-organizing and flexible, so that if your task changes, this whole supply chain infrastructure can change. That is the theory. About a year and a half ago in San Francisco, I had a little discussion with Jim Fan from NVIDIA about gaming.
Dr Bob Engels
And he was talking about a game engine just sitting at the bottom of the video hardware, so you come home and just say, okay, now I want to play something immersive with a space rocket and I want to learn how to steer the space shuttle. And you could just start that game; it would be made on the fly. Now imagine that you can do that with your supply chain. I mean, it's still a bit futuristic, but we are going step by step in such a direction.

Michael Finocchiaro
So how do the most innovative companies internally use AI in product development today, as far as the customers you've seen?

Dr Bob Engels
Well, we have seen interesting cases in CPRD, consumer products and retail, for example, where AI really plays an important role in identifying communities, the emotions of communities, product opportunities related to that, and also proposals for not only the product itself, but also the design and the marketing material, in several languages for several parts of the world. The whole ideation process you can really enhance enormously with that.

Michael Finocchiaro
So, the ideation process.
Michael Finocchiaro
I think it's very good for brainstorming in general.

Dr Bob Engels
And tutoring. Tutoring is also extremely good. When I had a keynote last time in Germany, I don't speak too much German, so I was in the car and I just had a dialogue with ChatGPT about the topic of my keynote. I was challenging ChatGPT, while driving, in this conversational mode, as a tutor, as a sparring partner, to actually challenge me with questions and so on. It was beautiful for teaching people.
Michael Finocchiaro
That's amazing. And do you think that the use of AI by a vendor, whether that's a small startup or a big player, translates into better AI adoption by the customers themselves? Are there a lot of AI transformations inside companies that happen because they bring in an external vendor, or does it really have to start internally, with the mindset, in order to take full advantage of it?
Dr Bob Engels
Well, yeah, I think that last point is very important. I mean, you cannot just buy an AI culture. You cannot buy it.

Michael Finocchiaro
But it could serve as a catalyst, perhaps.

Dr Bob Engels
Well, sometimes we have extremely cool cases that we have implemented before. We have startups that do amazing things at ideation, maybe the first round of a product. But you also need to bring your own internal organization with you. The whole culture needs to change. And it's not an AI strategy you need; it's really this automation strategy. What does it do with your culture? What does it do with your product strategy? Really, every aspect.

Michael Finocchiaro
What do you recommend companies do in terms of building internal agents versus relying on external tools and APIs? I mean, at what level should they be looking when you're talking about automation? Should it be high-level, built into something like Cursor, or, like what I've seen you guys do with LangChain and LangGraph, the low-level kind of stuff?

Dr Bob Engels
I think the long-term vision is really to have an ecosystem, a worldwide ecosystem like the internet, maybe the internet of agents, where you just collect the agents you need for your tasks. Maybe some of them are your own, maybe some of them are hired in, maybe some are open source. It is really: collect all the agents you need to do your task and let them perform. That is the utopia of this agent thinking.
Michael Finocchiaro
The death of SaaS.
Michael Finocchiaro
Which is going to be a major upheaval in terms of billing, right? And in terms of the economics of our industry.

Dr Bob Engels
That needs to be solved, yes. Also the whole energy consumption of it. And not only that: also the discovery of agents. How do you discover the agents you need? And what about malicious attacks and adversarial attacks with an agent? And man-in-the-middle attacks? We're already seeing that. Carnegie Mellon just had research on that, where agents were playing foul just because they needed to fulfill a task. That kind of thing will probably happen. You cannot fully control all of that.

Michael Finocchiaro
That reminds me of a talk, because you and I met at the Capgemini Engineering Horizons Conference, which I found absolutely fabulous. There was a woman from Oxford University, I don't know if you remember that talk, who was talking about robots and the necessity in the robot world for an electronic black box. And so when I thought about using AI in manufacturing engineering, I thought, okay, so when we build this airplane that crashes, or a machine whose software kills somebody because of AI,
how do you roll it back? You know, airplanes have a black box, so we can actually figure that out in most cases. But for this prompting, I mean, how do you back up these 25 billion or 77 million parameters and 18,000 prompts? That's really an issue. In terms of guardrails and determinism, I suppose.

Dr Bob Engels
Yeah, but I mean, if you really go to that utopian scenario of the internet of agents, a whole ecosystem of them, you get the same problems as in society. If one agent misbehaves, what do you do? How do you actually track it? How do you trace it? There are a lot of those issues that are unresolved. We don't have answers to that yet. Currently the complexity is still so low that, in cases where you see things misbehaving, we can actually take them out. But there should be governance structures for that. And this woman from Oxford that you referred to, I forgot the name as well, I'm sorry.

Michael Finocchiaro
I'll look it up. I'll put it in the post in any case. I'll send it to you.

Dr Bob Engels
She had a good story on how you can actually start to think about the black box, so that you at least can trace what happens, and the explainability of what happened. But you can also easily create race conditions. We have seen that in 2010 with the stock markets: two agents bidding against each other, and the whole New York Stock Exchange crashed within microseconds. I don't want to have that on a global scale.
Michael Finocchiaro
Ooh, it's raining; I hope the camera's going to be fine. When I was at the Share PLM conference in Jerez, I don't know if you know Share PLM, Helena Gutierrez, we had a breakout session on AI, and Rob Ferroni and I were doing an AI one. It was interesting: in my little group, one of the ideas was that instead of the AI always saying "fantastic idea, this is great," we need a skeptical one saying "no, that's crap." It's almost like we need to train the models to question themselves and question things, and not just try to please the person prompting them.

Dr Bob Engels
We do have that implemented, in a naive way, in o3 for example. OpenAI does that with different agents: one finds an answer, another one asks for alternative answers, a third one criticizes the whole bunch, and a fourth one criticizes another. So yeah, with an LLM, if you run it in several instances so that they are actually competing, it works very well. You get much better answers. Still not perfect, but you get much better answers. So that works. But now you have the same thing, of course, with humans and culture. We now have companies, like you said before, that just want to use it everywhere with no questions asked, and we have companies that want to restrict everything. Of course, the answer is somewhere in the middle. And we need to get this experience so that we can actually change our cultures. And for that, you need to learn. So you need to be enabled. If you're not enabled, you cannot learn.

Michael Finocchiaro
Do you have any practical tips for companies building their own domain-specific co-pilots?
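The propose-and-criticize pattern described above can be sketched with stub functions standing in for real model calls; the function names and the scoring rule are purely illustrative, not how o3 or any particular product works internally:

```python
# Sketch of a generate-then-criticize loop: one "agent" proposes candidate
# answers, another scores them, and only the best survives. The stubs below
# stand in for real LLM calls and are invented for illustration.

def proposer(question: str) -> list[str]:
    # A real system would sample several LLM completions here.
    return [
        f"{question}: quick guess",
        f"{question}: reasoned answer with evidence",
    ]

def critic(answer: str) -> float:
    # A real critic would be another model call; this stub rewards
    # answers that cite evidence.
    return 1.0 if "evidence" in answer else 0.2

def best_answer(question: str) -> str:
    """Return the candidate the critic scores highest."""
    return max(proposer(question), key=critic)

print(best_answer("What alloy is the spar?"))
```

Running several competing instances and letting a critic arbitrate is exactly the "skeptical agent" idea from the breakout session: the critic's job is to not please the prompter.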
Dr Bob Engels
Well, there are several things to that again. I mean, the first thing that was built was the chatbots, right? That is really a query interface: here's your documentation, ask it questions. Very interesting for many organizations; they don't have that yet. The co-pilots are an integration of that with your productivity tools, your Teams environment, whatever you use. That's a co-pilot. And if you automate that, so that
this copilot automatically changes your calendar, makes your appointments, reads your emails, then you get a kind of autopilot. And these can then communicate with other agents, and you get a multi-agent system. So there is a path you should go through that makes sense, so that you bring your organization up to speed, your people learn, and you see and understand much better what the risks are in your own ecosystem, so that you can actually make wise choices, I would say.

Michael Finocchiaro
So for you, in the next two to three years, and you know, normally we used to say five to ten years, but now we can't even say six months: before, let's say, 2028, what is the most exciting stuff you see coming down the pipe?

Dr Bob Engels
It's difficult to say, but I think one of the things that really will have an impact on all of us is if this automation, the real autonomy, really starts to hit us in our daily life. I mean really autonomy, so that some tasks are really taken out of your hands. And if that works well, then I think we will really see a change. You have it to a little extent in your car: if something really weird happens in front of your new car, it will slam the brakes.
Dr Bob Engels
And that is also an autonomous agent that is actually allowed to use the camera, analyze it, actuate and brake. So it has agency, it has authority and it has autonomy. Those three things come together.

Michael Finocchiaro
Agency, authority and autonomy. I like that.

Dr Bob Engels
It's also in the white paper that we have.

Michael Finocchiaro
Which AI capabilities do you think are still out of reach for engineering but are probably coming soon?

Dr Bob Engels
I think everything is within reach for engineering; the question is what would really make sense to use. But one of the things that is interesting, which I really like, is edge AI, so AI on the edge. What can you really do on hardware with small footprints that makes a lot of sense? Because we are collecting massive amounts of data. Can we do something on the edge already to actually filter it, combine and merge it, compress it,
Dr Bob Engels
and associate it. So what can we do on the edge? For engineering, I think that's exciting.

Michael Finocchiaro
And if you were investing in an AI-first startup, what would it look like?

Dr Bob Engels
That's a very good question. If you do it well, like in that talk with Jim Fan, you just need the infrastructure; you can just tell it what it has to do. And we're still waiting, by the way, for that one-person unicorn that was promised to us two years ago, I think. So let's see if that is going to be a Tesla story. Hahaha.
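The edge-side pre-processing described above can be sketched as a small on-device filter: keep a rolling window, forward outliers immediately, and send a compressed summary instead of the raw stream. The window size, threshold, and sensor readings below are illustrative assumptions:

```python
# Sketch: edge AI pre-processing. Instead of streaming every raw sensor
# reading to a central server, the device forwards only anomalies plus one
# compressed summary per window. Parameters are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def edge_filter(stream, window=8, k=3.0):
    """Yield ('anomaly', x) for outliers and one ('summary', mean) per window."""
    buf = deque(maxlen=window)       # rolling context for anomaly detection
    batch = []                       # readings awaiting compression
    for x in stream:
        if len(buf) == buf.maxlen and abs(x - mean(buf)) > k * (stdev(buf) or 1.0):
            yield ("anomaly", x)     # forward immediately
        buf.append(x)
        batch.append(x)
        if len(batch) == window:     # compress the rest into one value
            yield ("summary", round(mean(batch), 2))
            batch.clear()

# 16 temperature-like readings with one spike at 95.0.
readings = [20.0, 20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0,
            95.0, 20.1, 20.0, 19.9, 20.1, 20.0, 20.2, 19.8]
events = list(edge_filter(readings))
print(len(readings), "->", len(events))  # -> 16 -> 3
```

Sixteen raw readings collapse to three events (two summaries plus the spike), which is the latency and bandwidth saving the conversation points to.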
Michael Finocchiaro
So then, just to wrap up, I had this other fun thing: a lightning round that we call AI snake oil. Some of the terms that we're throwing around all the time: are they overhyped, underrated or spot on? Digital twins.

Dr Bob Engels
They are at the moment spot on, I would say.

Michael Finocchiaro
AutoML.

Dr Bob Engels
It's progress from there, I would say.

Michael Finocchiaro
Autonomy and autonomous agents.

Dr Bob Engels
Probably overhyped.
Dr Bob Engels
I mean, in engineering it's probably... I don't hear it so much in...

Michael Finocchiaro
No, me neither. Explainability.

Dr Bob Engels
That is underperforming.

Michael Finocchiaro
Underperforming. GraphML, or GraphRAG.

Dr Bob Engels
It is hybrid AI, and I think it is the closing of the circle that we started with. So I think it is not good enough yet; it should be further developed. I think there's a lot of potential.
Michael Finocchiaro
Or GraphRAG.
Michael Finocchiaro
Ontologies.

Dr Bob Engels
Underestimated also. I mean, you need some kind of vocabulary, some kind of semantic common understanding between worlds, and ontologies are one of the possibilities we already developed that you can use for that. From there we will probably develop more.
Michael Finocchiaro
Prompt engineering.

Dr Bob Engels
I think that is a bit on the way out. I mean, you use it less and less. But maybe that's also because people get the hang of it now. And you get more and more systems that create the prompts for you.

Michael Finocchiaro
Synthetic data.

Dr Bob Engels
That is underestimated, I think. With synthetic data, you can do quite a lot. You can build datasets with synthetic data for testing purposes, for training purposes, and also for preventing or changing biases in data. There's no such thing as unbiased data; it doesn't exist. But for your task at hand, you need to neutralize bias for the things that are important. It doesn't make sense to have a gender-neutral dataset for ovarian cancer; it simply doesn't make sense. So these kinds of things we need to work a lot with. Synthetic data has one real drawback if it's used with LLMs, and that is that it can really pollute the internet, so much that it becomes obsolete. There was a philosopher three years ago who wrote a paper saying that three years from now the internet will be obsolete, because all these things that are created are not perfect. They have an error margin, and then they're used for training again, so you're just destroying your own data. And somebody else said the last model that was trained on good data was GPT-3.

Michael Finocchiaro
Which I thought was okay, depending on which data they were training on.
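The bias-correcting use of synthetic data mentioned above can be sketched as oversampling an under-represented class with slightly perturbed copies of real rows. The jitter scheme and the toy pass/fail inspection dataset are invented for illustration, not a recommended production recipe:

```python
# Sketch: rebalancing a biased dataset with synthetic rows. The toy data and
# the jitter-based synthesis are illustrative assumptions only.
import random

def synthesize(sample, rng, jitter=0.05):
    """Create a synthetic variant by perturbing each numeric feature slightly."""
    return [x * (1 + rng.uniform(-jitter, jitter)) for x in sample]

def rebalance(data, labels, target_label, rng):
    """Oversample `target_label` with synthetic rows until classes are even."""
    majority = max(labels.count(label) for label in set(labels))
    pool = [row for row, label in zip(data, labels) if label == target_label]
    while labels.count(target_label) < majority:
        data.append(synthesize(rng.choice(pool), rng))
        labels.append(target_label)
    return data, labels

rng = random.Random(42)
data = [[1.0, 2.0]] * 6 + [[5.0, 6.0]] * 2   # 6 "pass" parts vs 2 "fail" parts
labels = ["pass"] * 6 + ["fail"] * 2
data, labels = rebalance(data, labels, "fail", rng)
print(labels.count("pass"), labels.count("fail"))  # -> 6 6
```

This is also where the "neutralize the bias that matters" caveat bites: you rebalance only along dimensions relevant to the task, and the model-collapse risk mentioned above is the same loop run at internet scale, with models retraining on their own imperfect output.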
Michael Finocchiaro
Exactly. Well, this has been great. I appreciate it. Thank you very much.

Dr Bob Engels
Thank you very much. It was so fun.

Michael Finocchiaro
We'll see you on the next podcast. Bye bye. That was fun.